00:00:00.000 Started by upstream project "autotest-per-patch" build number 126211 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.061 Fetching changes from the remote Git repository 00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.086 Using shallow fetch with depth 1 00:00:00.086 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.086 > git --version # timeout=10 00:00:00.124 > git --version # 'git version 2.39.2' 00:00:00.124 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.169 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.301 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.314 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.326 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.326 > git config core.sparsecheckout # timeout=10 00:00:03.337 > git read-tree -mu HEAD # timeout=10 00:00:03.355 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.376 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.376 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.461 [Pipeline] Start of Pipeline 00:00:03.480 [Pipeline] library 00:00:03.481 Loading library shm_lib@master 00:00:03.482 Library shm_lib@master is cached. Copying from home. 00:00:03.499 [Pipeline] node 00:00:03.507 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.510 [Pipeline] { 00:00:03.520 [Pipeline] catchError 00:00:03.521 [Pipeline] { 00:00:03.534 [Pipeline] wrap 00:00:03.542 [Pipeline] { 00:00:03.549 [Pipeline] stage 00:00:03.550 [Pipeline] { (Prologue) 00:00:03.761 [Pipeline] sh 00:00:04.044 + logger -p user.info -t JENKINS-CI 00:00:04.064 [Pipeline] echo 00:00:04.066 Node: GP11 00:00:04.074 [Pipeline] sh 00:00:04.373 [Pipeline] setCustomBuildProperty 00:00:04.385 [Pipeline] echo 00:00:04.386 Cleanup processes 00:00:04.390 [Pipeline] sh 00:00:04.669 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.669 2019867 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.682 [Pipeline] sh 00:00:04.967 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.968 ++ grep -v 'sudo pgrep' 00:00:04.968 ++ awk '{print $1}' 00:00:04.968 + sudo kill -9 00:00:04.968 + true 00:00:04.986 [Pipeline] cleanWs 00:00:04.998 [WS-CLEANUP] Deleting project workspace... 00:00:04.998 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.004 [WS-CLEANUP] done 00:00:05.010 [Pipeline] setCustomBuildProperty 00:00:05.026 [Pipeline] sh 00:00:05.308 + sudo git config --global --replace-all safe.directory '*' 00:00:05.387 [Pipeline] httpRequest 00:00:05.416 [Pipeline] echo 00:00:05.417 Sorcerer 10.211.164.101 is alive 00:00:05.424 [Pipeline] httpRequest 00:00:05.428 HttpMethod: GET 00:00:05.428 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.429 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.446 Response Code: HTTP/1.1 200 OK 00:00:05.447 Success: Status code 200 is in the accepted range: 200,404 00:00:05.447 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.825 [Pipeline] sh 00:00:08.109 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.121 [Pipeline] httpRequest 00:00:08.134 [Pipeline] echo 00:00:08.135 Sorcerer 10.211.164.101 is alive 00:00:08.141 [Pipeline] httpRequest 00:00:08.146 HttpMethod: GET 00:00:08.146 URL: http://10.211.164.101/packages/spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:00:08.147 Sending request to url: http://10.211.164.101/packages/spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:00:08.149 Response Code: HTTP/1.1 200 OK 00:00:08.150 Success: Status code 200 is in the accepted range: 200,404 00:00:08.150 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:00:25.149 [Pipeline] sh 00:00:25.434 + tar --no-same-owner -xf spdk_d8f06a5fec162a535d3a23e9ae8fc57eb4431c82.tar.gz 00:00:27.968 [Pipeline] sh 00:00:28.246 + git -C spdk log --oneline -n5 00:00:28.246 d8f06a5fe scripts/pkgdep: Drop support for downloading shfmt binaries 00:00:28.246 719d03c6a sock/uring: only register net impl if supported 00:00:28.246 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:28.246 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:28.246 6c7c1f57e accel: add sequence outstanding stat 00:00:28.256 [Pipeline] } 00:00:28.268 [Pipeline] // stage 00:00:28.274 [Pipeline] stage 00:00:28.276 [Pipeline] { (Prepare) 00:00:28.288 [Pipeline] writeFile 00:00:28.301 [Pipeline] sh 00:00:28.580 + logger -p user.info -t JENKINS-CI 00:00:28.590 [Pipeline] sh 00:00:28.867 + logger -p user.info -t JENKINS-CI 00:00:28.877 [Pipeline] sh 00:00:29.161 + cat autorun-spdk.conf 00:00:29.161 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.161 SPDK_TEST_NVMF=1 00:00:29.161 SPDK_TEST_NVME_CLI=1 00:00:29.161 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.161 SPDK_TEST_NVMF_NICS=e810 00:00:29.161 SPDK_TEST_VFIOUSER=1 00:00:29.161 SPDK_RUN_UBSAN=1 00:00:29.161 NET_TYPE=phy 00:00:29.168 RUN_NIGHTLY=0 00:00:29.176 [Pipeline] readFile 00:00:29.198 [Pipeline] withEnv 00:00:29.200 [Pipeline] { 00:00:29.210 [Pipeline] sh 00:00:29.491 + set -ex 00:00:29.491 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:29.491 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:29.491 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.491 ++ SPDK_TEST_NVMF=1 00:00:29.491 ++ SPDK_TEST_NVME_CLI=1 00:00:29.491 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.491 ++ SPDK_TEST_NVMF_NICS=e810 00:00:29.491 ++ SPDK_TEST_VFIOUSER=1 00:00:29.491 ++ SPDK_RUN_UBSAN=1 00:00:29.491 ++ NET_TYPE=phy 00:00:29.491 ++ RUN_NIGHTLY=0 00:00:29.491 + case $SPDK_TEST_NVMF_NICS in 00:00:29.491 + DRIVERS=ice 00:00:29.491 
+ [[ tcp == \r\d\m\a ]] 00:00:29.491 + [[ -n ice ]] 00:00:29.491 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:29.491 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:29.491 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:29.491 rmmod: ERROR: Module irdma is not currently loaded 00:00:29.491 rmmod: ERROR: Module i40iw is not currently loaded 00:00:29.491 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:29.491 + true 00:00:29.491 + for D in $DRIVERS 00:00:29.491 + sudo modprobe ice 00:00:29.491 + exit 0 00:00:29.500 [Pipeline] } 00:00:29.517 [Pipeline] // withEnv 00:00:29.521 [Pipeline] } 00:00:29.537 [Pipeline] // stage 00:00:29.547 [Pipeline] catchError 00:00:29.548 [Pipeline] { 00:00:29.561 [Pipeline] timeout 00:00:29.561 Timeout set to expire in 50 min 00:00:29.562 [Pipeline] { 00:00:29.576 [Pipeline] stage 00:00:29.578 [Pipeline] { (Tests) 00:00:29.590 [Pipeline] sh 00:00:29.871 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.872 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.872 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.872 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:29.872 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:29.872 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:29.872 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:29.872 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:29.872 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:29.872 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:29.872 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:29.872 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.872 + source /etc/os-release 00:00:29.872 ++ NAME='Fedora Linux' 00:00:29.872 ++ VERSION='38 (Cloud Edition)' 00:00:29.872 ++ ID=fedora 00:00:29.872 ++ VERSION_ID=38 00:00:29.872 ++ VERSION_CODENAME= 00:00:29.872 ++ PLATFORM_ID=platform:f38 00:00:29.872 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:29.872 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:29.872 ++ LOGO=fedora-logo-icon 00:00:29.872 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:29.872 ++ HOME_URL=https://fedoraproject.org/ 00:00:29.872 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:29.872 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:29.872 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:29.872 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:29.872 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:29.872 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:29.872 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:29.872 ++ SUPPORT_END=2024-05-14 00:00:29.872 ++ VARIANT='Cloud Edition' 00:00:29.872 ++ VARIANT_ID=cloud 00:00:29.872 + uname -a 00:00:29.872 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:29.872 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:30.810 Hugepages 00:00:30.810 node hugesize free / total 00:00:30.810 node0 1048576kB 0 / 0 00:00:30.810 node0 2048kB 0 / 0 00:00:30.810 node1 1048576kB 0 / 0 00:00:30.810 node1 2048kB 0 / 0 00:00:30.810 00:00:30.810 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:30.810 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:30.810 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:30.811 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 
00:00:30.811 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:30.811 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:30.811 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:30.811 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:30.811 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:30.811 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:30.811 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:30.811 + rm -f /tmp/spdk-ld-path 00:00:30.811 + source autorun-spdk.conf 00:00:30.811 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.811 ++ SPDK_TEST_NVMF=1 00:00:30.811 ++ SPDK_TEST_NVME_CLI=1 00:00:30.811 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:30.811 ++ SPDK_TEST_NVMF_NICS=e810 00:00:30.811 ++ SPDK_TEST_VFIOUSER=1 00:00:30.811 ++ SPDK_RUN_UBSAN=1 00:00:30.811 ++ NET_TYPE=phy 00:00:30.811 ++ RUN_NIGHTLY=0 00:00:30.811 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:30.811 + [[ -n '' ]] 00:00:30.811 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:30.811 + for M in /var/spdk/build-*-manifest.txt 00:00:30.811 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:30.811 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:30.811 + for M in /var/spdk/build-*-manifest.txt 00:00:30.811 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:30.811 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:30.811 ++ uname 00:00:30.811 + [[ Linux == \L\i\n\u\x ]] 00:00:30.811 + sudo dmesg -T 00:00:30.811 + sudo dmesg --clear 00:00:31.069 + dmesg_pid=2020543 00:00:31.069 + [[ Fedora Linux == FreeBSD ]] 00:00:31.069 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:31.069 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:31.069 + sudo dmesg -Tw 00:00:31.069 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:31.069 + [[ -x /usr/src/fio-static/fio ]] 00:00:31.069 + export FIO_BIN=/usr/src/fio-static/fio 00:00:31.069 + FIO_BIN=/usr/src/fio-static/fio 00:00:31.069 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:31.069 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:31.069 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:31.069 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:31.069 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:31.069 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:31.069 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:31.069 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:31.069 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:31.069 Test configuration: 00:00:31.069 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.069 SPDK_TEST_NVMF=1 00:00:31.069 SPDK_TEST_NVME_CLI=1 00:00:31.069 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.069 SPDK_TEST_NVMF_NICS=e810 00:00:31.069 SPDK_TEST_VFIOUSER=1 00:00:31.069 SPDK_RUN_UBSAN=1 00:00:31.069 NET_TYPE=phy 00:00:31.069 RUN_NIGHTLY=0 17:23:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:31.069 17:23:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:31.069 17:23:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:31.069 17:23:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:31.069 17:23:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.069 17:23:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.069 17:23:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.069 17:23:26 -- paths/export.sh@5 -- $ export PATH 00:00:31.069 17:23:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:31.069 17:23:26 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:31.069 17:23:26 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:31.069 17:23:26 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721057006.XXXXXX 00:00:31.069 17:23:26 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721057006.pyqcGX 00:00:31.069 17:23:26 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:31.069 17:23:26 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:31.069 17:23:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:31.069 17:23:26 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:31.069 17:23:26 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:31.069 17:23:26 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:31.069 17:23:26 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:31.069 17:23:26 -- common/autotest_common.sh@10 -- $ set +x 00:00:31.069 17:23:26 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:31.069 17:23:26 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:31.069 17:23:26 -- pm/common@17 -- $ local monitor 00:00:31.069 17:23:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.069 17:23:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.070 17:23:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.070 17:23:26 -- pm/common@21 -- $ date +%s 00:00:31.070 17:23:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:31.070 17:23:26 -- pm/common@21 -- $ date +%s 00:00:31.070 17:23:26 -- pm/common@25 -- $ sleep 1 00:00:31.070 17:23:26 -- pm/common@21 -- $ date +%s 00:00:31.070 17:23:26 -- pm/common@21 -- $ date +%s 00:00:31.070 17:23:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721057006 00:00:31.070 17:23:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721057006 00:00:31.070 17:23:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721057006 00:00:31.070 17:23:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721057006 00:00:31.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721057006_collect-vmstat.pm.log 00:00:31.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721057006_collect-cpu-load.pm.log 00:00:31.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721057006_collect-cpu-temp.pm.log 00:00:31.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721057006_collect-bmc-pm.bmc.pm.log 00:00:32.008 17:23:27 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:32.008 17:23:27 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:32.008 17:23:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:32.008 17:23:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:32.008 17:23:27 -- spdk/autobuild.sh@16 -- $ date -u 00:00:32.008 Mon Jul 15 03:23:27 PM UTC 2024 00:00:32.008 17:23:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:32.008 v24.09-pre-203-gd8f06a5fe 00:00:32.008 17:23:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:32.008 17:23:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:32.008 17:23:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:32.008 17:23:27 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:32.008 17:23:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:32.008 17:23:27 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.008 ************************************ 00:00:32.008 START TEST ubsan 00:00:32.008 ************************************ 00:00:32.008 17:23:27 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:32.008 using ubsan 00:00:32.008 00:00:32.008 real 0m0.000s 00:00:32.008 user 0m0.000s 00:00:32.008 sys 0m0.000s 00:00:32.008 17:23:27 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:32.008 17:23:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:32.008 ************************************ 00:00:32.008 END TEST ubsan 00:00:32.008 ************************************ 00:00:32.008 17:23:27 -- common/autotest_common.sh@1142 -- $ return 0 00:00:32.008 17:23:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:32.008 17:23:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:32.008 17:23:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:32.008 17:23:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:32.008 17:23:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:32.008 17:23:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:32.008 17:23:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:32.008 17:23:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:32.008 17:23:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:32.269 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:32.269 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:32.527 Using 'verbs' RDMA provider 00:00:43.154 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:53.212 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:53.212 Creating mk/config.mk...done. 00:00:53.212 Creating mk/cc.flags.mk...done. 00:00:53.212 Type 'make' to build. 00:00:53.212 17:23:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:00:53.212 17:23:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:53.212 17:23:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:53.212 17:23:47 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.212 ************************************ 00:00:53.212 START TEST make 00:00:53.212 ************************************ 00:00:53.212 17:23:47 make -- common/autotest_common.sh@1123 -- $ make -j48 00:00:53.212 make[1]: Nothing to be done for 'all'. 
00:00:54.609 The Meson build system 00:00:54.609 Version: 1.3.1 00:00:54.609 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:00:54.609 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:54.609 Build type: native build 00:00:54.609 Project name: libvfio-user 00:00:54.609 Project version: 0.0.1 00:00:54.609 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:54.609 C linker for the host machine: cc ld.bfd 2.39-16 00:00:54.609 Host machine cpu family: x86_64 00:00:54.609 Host machine cpu: x86_64 00:00:54.609 Run-time dependency threads found: YES 00:00:54.609 Library dl found: YES 00:00:54.609 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:54.609 Run-time dependency json-c found: YES 0.17 00:00:54.609 Run-time dependency cmocka found: YES 1.1.7 00:00:54.609 Program pytest-3 found: NO 00:00:54.609 Program flake8 found: NO 00:00:54.609 Program misspell-fixer found: NO 00:00:54.609 Program restructuredtext-lint found: NO 00:00:54.609 Program valgrind found: YES (/usr/bin/valgrind) 00:00:54.609 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:54.609 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:54.609 Compiler for C supports arguments -Wwrite-strings: YES 00:00:54.609 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:00:54.609 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:00:54.609 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:00:54.609 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:00:54.609 Build targets in project: 8 00:00:54.609 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:00:54.609 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:00:54.609 00:00:54.609 libvfio-user 0.0.1 00:00:54.609 00:00:54.609 User defined options 00:00:54.609 buildtype : debug 00:00:54.609 default_library: shared 00:00:54.609 libdir : /usr/local/lib 00:00:54.609 00:00:54.609 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:55.185 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:55.185 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:00:55.185 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:00:55.185 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:00:55.448 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:00:55.448 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:00:55.448 [6/37] Compiling C object samples/null.p/null.c.o 00:00:55.448 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:00:55.448 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:00:55.448 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:00:55.448 [10/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:00:55.448 [11/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:00:55.448 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:00:55.448 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:00:55.448 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:00:55.448 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:00:55.448 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:00:55.448 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:00:55.448 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:00:55.448 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:00:55.448 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:00:55.448 [21/37] Compiling C object samples/server.p/server.c.o 00:00:55.448 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:00:55.448 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:00:55.448 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:00:55.448 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:00:55.709 [26/37] Compiling C object samples/client.p/client.c.o 00:00:55.709 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:00:55.709 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:00:55.709 [29/37] Linking target samples/client 00:00:55.709 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:00:55.709 [31/37] Linking target test/unit_tests 00:00:55.709 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:00:55.970 [33/37] Linking target samples/server 00:00:55.970 [34/37] Linking target samples/shadow_ioeventfd_server 00:00:55.970 [35/37] Linking target samples/lspci 00:00:55.970 [36/37] Linking target samples/gpio-pci-idio-16 00:00:55.970 [37/37] Linking target samples/null 00:00:55.970 INFO: autodetecting backend as ninja 00:00:55.970 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:00:55.970 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:56.545 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:00:56.545 ninja: no work to do. 00:01:01.820 The Meson build system 00:01:01.820 Version: 1.3.1 00:01:01.820 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:01.820 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:01.820 Build type: native build 00:01:01.820 Program cat found: YES (/usr/bin/cat) 00:01:01.820 Project name: DPDK 00:01:01.820 Project version: 24.03.0 00:01:01.820 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:01.820 C linker for the host machine: cc ld.bfd 2.39-16 00:01:01.820 Host machine cpu family: x86_64 00:01:01.820 Host machine cpu: x86_64 00:01:01.820 Message: ## Building in Developer Mode ## 00:01:01.820 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:01.820 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:01.820 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:01.820 Program python3 found: YES (/usr/bin/python3) 00:01:01.820 Program cat found: YES (/usr/bin/cat) 00:01:01.820 Compiler for C supports arguments -march=native: YES 00:01:01.820 Checking for size of "void *" : 8 00:01:01.820 Checking for size of "void *" : 8 (cached) 00:01:01.820 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:01.820 Library m found: YES 00:01:01.820 Library numa found: YES 00:01:01.820 Has header "numaif.h" : YES 00:01:01.820 Library fdt found: NO 00:01:01.820 Library execinfo found: NO 00:01:01.820 Has header "execinfo.h" : YES 00:01:01.820 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:01.820 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:01.820 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:01.820 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:01.820 Run-time dependency openssl found: YES 3.0.9 00:01:01.820 Run-time dependency libpcap found: YES 1.10.4 00:01:01.820 Has header "pcap.h" with dependency libpcap: YES 00:01:01.820 Compiler for C supports arguments -Wcast-qual: YES 00:01:01.820 Compiler for C supports arguments -Wdeprecated: YES 00:01:01.820 Compiler for C supports arguments -Wformat: YES 00:01:01.820 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:01.820 Compiler for C supports arguments -Wformat-security: NO 00:01:01.820 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:01.820 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:01.820 Compiler for C supports arguments -Wnested-externs: YES 00:01:01.821 Compiler for C supports arguments -Wold-style-definition: YES 00:01:01.821 Compiler for C supports arguments -Wpointer-arith: YES 00:01:01.821 Compiler for C supports arguments -Wsign-compare: YES 00:01:01.821 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:01.821 Compiler for C supports arguments -Wundef: YES 00:01:01.821 Compiler for C supports arguments -Wwrite-strings: YES 00:01:01.821 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:01.821 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:01.821 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:01.821 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:01.821 Program objdump found: YES (/usr/bin/objdump) 00:01:01.821 Compiler for C supports arguments -mavx512f: YES 00:01:01.821 Checking if "AVX512 checking" compiles: YES 00:01:01.821 Fetching value of define "__SSE4_2__" : 1 00:01:01.821 Fetching value of define "__AES__" : 1 00:01:01.821 Fetching value of define "__AVX__" : 1 00:01:01.821 Fetching value of define "__AVX2__" : (undefined) 00:01:01.821 Fetching value of define "__AVX512BW__" : (undefined) 00:01:01.821 Fetching value of define "__AVX512CD__" : (undefined) 00:01:01.821 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:01.821 Fetching value of define "__AVX512F__" : (undefined) 00:01:01.821 Fetching value of define "__AVX512VL__" : (undefined) 00:01:01.821 Fetching value of define "__PCLMUL__" : 1 00:01:01.821 Fetching value of define "__RDRND__" : 1 00:01:01.821 Fetching value of define "__RDSEED__" : (undefined) 00:01:01.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:01.821 Fetching value of define "__znver1__" : (undefined) 00:01:01.821 Fetching value of define "__znver2__" : (undefined) 00:01:01.821 Fetching value of define "__znver3__" : (undefined) 00:01:01.821 Fetching value of define "__znver4__" : (undefined) 00:01:01.821 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:01.821 Message: lib/log: Defining dependency "log" 00:01:01.821 Message: lib/kvargs: Defining dependency "kvargs" 00:01:01.821 Message: lib/telemetry: Defining dependency "telemetry" 00:01:01.821 Checking for function "getentropy" : NO 00:01:01.821 Message: lib/eal: Defining dependency "eal" 00:01:01.821 Message: lib/ring: Defining dependency "ring" 00:01:01.821 Message: lib/rcu: Defining dependency "rcu" 00:01:01.821 Message: lib/mempool: Defining dependency "mempool" 00:01:01.821 Message: lib/mbuf: Defining dependency "mbuf" 00:01:01.821 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:01.821 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:01.821 Compiler for C supports arguments -mpclmul: YES 00:01:01.821 Compiler for C supports arguments -maes: YES 00:01:01.821 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:01.821 Compiler for C supports arguments -mavx512bw: YES 00:01:01.821 Compiler for C supports arguments -mavx512dq: YES 00:01:01.821 Compiler for C supports arguments -mavx512vl: YES 00:01:01.821 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:01.821 Compiler for C supports arguments -mavx2: YES 00:01:01.821 Compiler for C supports arguments -mavx: YES 00:01:01.821 Message: lib/net: Defining dependency "net" 00:01:01.821 Message: lib/meter: Defining dependency "meter" 00:01:01.821 Message: lib/ethdev: Defining dependency "ethdev" 00:01:01.821 Message: lib/pci: Defining dependency "pci" 00:01:01.821 Message: lib/cmdline: Defining dependency "cmdline" 00:01:01.821 Message: lib/hash: Defining dependency "hash" 00:01:01.821 Message: lib/timer: Defining dependency "timer" 00:01:01.821 Message: lib/compressdev: Defining dependency "compressdev" 00:01:01.821 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:01.821 Message: lib/dmadev: Defining dependency "dmadev" 00:01:01.821 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:01.821 Message: lib/power: Defining dependency "power" 00:01:01.821 Message: lib/reorder: Defining dependency "reorder" 00:01:01.821 
Message: lib/security: Defining dependency "security" 00:01:01.821 Has header "linux/userfaultfd.h" : YES 00:01:01.821 Has header "linux/vduse.h" : YES 00:01:01.821 Message: lib/vhost: Defining dependency "vhost" 00:01:01.821 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:01.821 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:01.821 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:01.821 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:01.821 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:01.821 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:01.821 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:01.821 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:01.821 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:01.821 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:01.821 Program doxygen found: YES (/usr/bin/doxygen) 00:01:01.821 Configuring doxy-api-html.conf using configuration 00:01:01.821 Configuring doxy-api-man.conf using configuration 00:01:01.821 Program mandb found: YES (/usr/bin/mandb) 00:01:01.821 Program sphinx-build found: NO 00:01:01.821 Configuring rte_build_config.h using configuration 00:01:01.821 Message: 00:01:01.821 ================= 00:01:01.821 Applications Enabled 00:01:01.821 ================= 00:01:01.821 00:01:01.821 apps: 00:01:01.821 00:01:01.821 00:01:01.821 Message: 00:01:01.821 ================= 00:01:01.821 Libraries Enabled 00:01:01.821 ================= 00:01:01.821 00:01:01.821 libs: 00:01:01.821 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:01.821 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:01.821 cryptodev, dmadev, power, reorder, security, vhost, 00:01:01.821 00:01:01.821 Message: 00:01:01.821 =============== 00:01:01.821 Drivers Enabled 00:01:01.821 =============== 00:01:01.821 00:01:01.821 common: 00:01:01.821 00:01:01.821 bus: 00:01:01.821 pci, vdev, 00:01:01.821 mempool: 00:01:01.821 ring, 00:01:01.821 dma: 00:01:01.821 00:01:01.821 net: 00:01:01.821 00:01:01.821 crypto: 00:01:01.821 00:01:01.821 compress: 00:01:01.821 00:01:01.821 vdpa: 00:01:01.821 00:01:01.821 00:01:01.821 Message: 00:01:01.821 ================= 00:01:01.821 Content Skipped 00:01:01.821 ================= 00:01:01.821 00:01:01.821 apps: 00:01:01.821 dumpcap: explicitly disabled via build config 00:01:01.821 graph: explicitly disabled via build config 00:01:01.821 pdump: explicitly disabled via build config 00:01:01.821 proc-info: explicitly disabled via build config 00:01:01.821 test-acl: explicitly disabled via build config 00:01:01.821 test-bbdev: explicitly disabled via build config 00:01:01.821 test-cmdline: explicitly disabled via build config 00:01:01.821 test-compress-perf: explicitly disabled via build config 00:01:01.821 test-crypto-perf: explicitly disabled via build config 00:01:01.821 test-dma-perf: explicitly disabled via build config 00:01:01.821 test-eventdev: explicitly disabled via build config 00:01:01.821 test-fib: explicitly disabled via build config 00:01:01.821 test-flow-perf: explicitly disabled via build config 00:01:01.821 test-gpudev: explicitly disabled via build config 00:01:01.821 test-mldev: explicitly disabled via build config 00:01:01.821 test-pipeline: explicitly disabled via build config 00:01:01.821 test-pmd: explicitly disabled via build config 
00:01:01.821 test-regex: explicitly disabled via build config 00:01:01.821 test-sad: explicitly disabled via build config 00:01:01.821 test-security-perf: explicitly disabled via build config 00:01:01.821 00:01:01.821 libs: 00:01:01.821 argparse: explicitly disabled via build config 00:01:01.821 metrics: explicitly disabled via build config 00:01:01.821 acl: explicitly disabled via build config 00:01:01.821 bbdev: explicitly disabled via build config 00:01:01.821 bitratestats: explicitly disabled via build config 00:01:01.821 bpf: explicitly disabled via build config 00:01:01.821 cfgfile: explicitly disabled via build config 00:01:01.821 distributor: explicitly disabled via build config 00:01:01.821 efd: explicitly disabled via build config 00:01:01.821 eventdev: explicitly disabled via build config 00:01:01.821 dispatcher: explicitly disabled via build config 00:01:01.821 gpudev: explicitly disabled via build config 00:01:01.821 gro: explicitly disabled via build config 00:01:01.821 gso: explicitly disabled via build config 00:01:01.821 ip_frag: explicitly disabled via build config 00:01:01.821 jobstats: explicitly disabled via build config 00:01:01.821 latencystats: explicitly disabled via build config 00:01:01.821 lpm: explicitly disabled via build config 00:01:01.821 member: explicitly disabled via build config 00:01:01.821 pcapng: explicitly disabled via build config 00:01:01.821 rawdev: explicitly disabled via build config 00:01:01.821 regexdev: explicitly disabled via build config 00:01:01.821 mldev: explicitly disabled via build config 00:01:01.821 rib: explicitly disabled via build config 00:01:01.821 sched: explicitly disabled via build config 00:01:01.821 stack: explicitly disabled via build config 00:01:01.821 ipsec: explicitly disabled via build config 00:01:01.821 pdcp: explicitly disabled via build config 00:01:01.821 fib: explicitly disabled via build config 00:01:01.821 port: explicitly disabled via build config 00:01:01.821 pdump: explicitly disabled via build config 00:01:01.821 table: explicitly disabled via build config 00:01:01.821 pipeline: explicitly disabled via build config 00:01:01.821 graph: explicitly disabled via build config 00:01:01.821 node: explicitly disabled via build config 00:01:01.821 00:01:01.821 drivers: 00:01:01.821 common/cpt: not in enabled drivers build config 00:01:01.821 common/dpaax: not in enabled drivers build config 00:01:01.821 common/iavf: not in enabled drivers build config 00:01:01.821 common/idpf: not in enabled drivers build config 00:01:01.821 common/ionic: not in enabled drivers build config 00:01:01.821 common/mvep: not in enabled drivers build config 00:01:01.821 common/octeontx: not in enabled drivers build config 00:01:01.821 bus/auxiliary: not in enabled drivers build config 00:01:01.821 bus/cdx: not in enabled drivers build config 00:01:01.821 bus/dpaa: not in enabled drivers build config 00:01:01.821 bus/fslmc: not in enabled drivers build config 00:01:01.821 bus/ifpga: not in enabled drivers build config 00:01:01.821 bus/platform: not in enabled drivers build config 00:01:01.821 bus/uacce: not in enabled drivers build config 00:01:01.821 bus/vmbus: not in enabled drivers build config 00:01:01.821 common/cnxk: not in enabled drivers build config 00:01:01.821 common/mlx5: not in enabled drivers build config 00:01:01.821 common/nfp: not in enabled drivers build config 00:01:01.821 common/nitrox: not in enabled drivers build config 00:01:01.821 common/qat: not in enabled drivers build config 00:01:01.821 common/sfc_efx: not in 
enabled drivers build config 00:01:01.821 mempool/bucket: not in enabled drivers build config 00:01:01.821 mempool/cnxk: not in enabled drivers build config 00:01:01.822 mempool/dpaa: not in enabled drivers build config 00:01:01.822 mempool/dpaa2: not in enabled drivers build config 00:01:01.822 mempool/octeontx: not in enabled drivers build config 00:01:01.822 mempool/stack: not in enabled drivers build config 00:01:01.822 dma/cnxk: not in enabled drivers build config 00:01:01.822 dma/dpaa: not in enabled drivers build config 00:01:01.822 dma/dpaa2: not in enabled drivers build config 00:01:01.822 dma/hisilicon: not in enabled drivers build config 00:01:01.822 dma/idxd: not in enabled drivers build config 00:01:01.822 dma/ioat: not in enabled drivers build config 00:01:01.822 dma/skeleton: not in enabled drivers build config 00:01:01.822 net/af_packet: not in enabled drivers build config 00:01:01.822 net/af_xdp: not in enabled drivers build config 00:01:01.822 net/ark: not in enabled drivers build config 00:01:01.822 net/atlantic: not in enabled drivers build config 00:01:01.822 net/avp: not in enabled drivers build config 00:01:01.822 net/axgbe: not in enabled drivers build config 00:01:01.822 net/bnx2x: not in enabled drivers build config 00:01:01.822 net/bnxt: not in enabled drivers build config 00:01:01.822 net/bonding: not in enabled drivers build config 00:01:01.822 net/cnxk: not in enabled drivers build config 00:01:01.822 net/cpfl: not in enabled drivers build config 00:01:01.822 net/cxgbe: not in enabled drivers build config 00:01:01.822 net/dpaa: not in enabled drivers build config 00:01:01.822 net/dpaa2: not in enabled drivers build config 00:01:01.822 net/e1000: not in enabled drivers build config 00:01:01.822 net/ena: not in enabled drivers build config 00:01:01.822 net/enetc: not in enabled drivers build config 00:01:01.822 net/enetfec: not in enabled drivers build config 00:01:01.822 net/enic: not in enabled drivers build config 00:01:01.822 net/failsafe: not in enabled drivers build config 00:01:01.822 net/fm10k: not in enabled drivers build config 00:01:01.822 net/gve: not in enabled drivers build config 00:01:01.822 net/hinic: not in enabled drivers build config 00:01:01.822 net/hns3: not in enabled drivers build config 00:01:01.822 net/i40e: not in enabled drivers build config 00:01:01.822 net/iavf: not in enabled drivers build config 00:01:01.822 net/ice: not in enabled drivers build config 00:01:01.822 net/idpf: not in enabled drivers build config 00:01:01.822 net/igc: not in enabled drivers build config 00:01:01.822 net/ionic: not in enabled drivers build config 00:01:01.822 net/ipn3ke: not in enabled drivers build config 00:01:01.822 net/ixgbe: not in enabled drivers build config 00:01:01.822 net/mana: not in enabled drivers build config 00:01:01.822 net/memif: not in enabled drivers build config 00:01:01.822 net/mlx4: not in enabled drivers build config 00:01:01.822 net/mlx5: not in enabled drivers build config 00:01:01.822 net/mvneta: not in enabled drivers build config 00:01:01.822 net/mvpp2: not in enabled drivers build config 00:01:01.822 net/netvsc: not in enabled drivers build config 00:01:01.822 net/nfb: not in enabled drivers build config 00:01:01.822 net/nfp: not in enabled drivers build config 00:01:01.822 net/ngbe: not in enabled drivers build config 00:01:01.822 net/null: not in enabled drivers build config 00:01:01.822 net/octeontx: not in enabled drivers build config 00:01:01.822 net/octeon_ep: not in enabled drivers build config 00:01:01.822 
net/pcap: not in enabled drivers build config 00:01:01.822 net/pfe: not in enabled drivers build config 00:01:01.822 net/qede: not in enabled drivers build config 00:01:01.822 net/ring: not in enabled drivers build config 00:01:01.822 net/sfc: not in enabled drivers build config 00:01:01.822 net/softnic: not in enabled drivers build config 00:01:01.822 net/tap: not in enabled drivers build config 00:01:01.822 net/thunderx: not in enabled drivers build config 00:01:01.822 net/txgbe: not in enabled drivers build config 00:01:01.822 net/vdev_netvsc: not in enabled drivers build config 00:01:01.822 net/vhost: not in enabled drivers build config 00:01:01.822 net/virtio: not in enabled drivers build config 00:01:01.822 net/vmxnet3: not in enabled drivers build config 00:01:01.822 raw/*: missing internal dependency, "rawdev" 00:01:01.822 crypto/armv8: not in enabled drivers build config 00:01:01.822 crypto/bcmfs: not in enabled drivers build config 00:01:01.822 crypto/caam_jr: not in enabled drivers build config 00:01:01.822 crypto/ccp: not in enabled drivers build config 00:01:01.822 crypto/cnxk: not in enabled drivers build config 00:01:01.822 crypto/dpaa_sec: not in enabled drivers build config 00:01:01.822 crypto/dpaa2_sec: not in enabled drivers build config 00:01:01.822 crypto/ipsec_mb: not in enabled drivers build config 00:01:01.822 crypto/mlx5: not in enabled drivers build config 00:01:01.822 crypto/mvsam: not in enabled drivers build config 00:01:01.822 crypto/nitrox: not in enabled drivers build config 00:01:01.822 crypto/null: not in enabled drivers build config 00:01:01.822 crypto/octeontx: not in enabled drivers build config 00:01:01.822 crypto/openssl: not in enabled drivers build config 00:01:01.822 crypto/scheduler: not in enabled drivers build config 00:01:01.822 crypto/uadk: not in enabled drivers build config 00:01:01.822 crypto/virtio: not in enabled drivers build config 00:01:01.822 compress/isal: not in enabled drivers build config 00:01:01.822 compress/mlx5: not in enabled drivers build config 00:01:01.822 compress/nitrox: not in enabled drivers build config 00:01:01.822 compress/octeontx: not in enabled drivers build config 00:01:01.822 compress/zlib: not in enabled drivers build config 00:01:01.822 regex/*: missing internal dependency, "regexdev" 00:01:01.822 ml/*: missing internal dependency, "mldev" 00:01:01.822 vdpa/ifc: not in enabled drivers build config 00:01:01.822 vdpa/mlx5: not in enabled drivers build config 00:01:01.822 vdpa/nfp: not in enabled drivers build config 00:01:01.822 vdpa/sfc: not in enabled drivers build config 00:01:01.822 event/*: missing internal dependency, "eventdev" 00:01:01.822 baseband/*: missing internal dependency, "bbdev" 00:01:01.822 gpu/*: missing internal dependency, "gpudev" 00:01:01.822 00:01:01.822 00:01:01.822 Build targets in project: 85 00:01:01.822 00:01:01.822 DPDK 24.03.0 00:01:01.822 00:01:01.822 User defined options 00:01:01.822 buildtype : debug 00:01:01.822 default_library : shared 00:01:01.822 libdir : lib 00:01:01.822 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:01.822 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:01.822 c_link_args : 00:01:01.822 cpu_instruction_set: native 00:01:01.822 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:01.822 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:01.822 enable_docs : false 00:01:01.822 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:01.822 enable_kmods : false 00:01:01.822 max_lcores : 128 00:01:01.822 tests : false 00:01:01.822 00:01:01.822 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:02.089 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:02.089 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:02.391 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:02.391 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:02.391 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:02.391 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:02.391 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:02.391 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:02.391 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:02.391 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:02.391 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:02.391 [11/268] Linking static target lib/librte_kvargs.a 00:01:02.391 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:02.391 [13/268] Linking static target lib/librte_log.a 00:01:02.391 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:02.391 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:02.391 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:02.972 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.972 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:02.972 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:02.972 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:03.233 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:03.233 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:03.233 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:03.233 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:03.233 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:03.233 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:03.233 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:03.233 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:03.233 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:03.233 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 
00:01:03.233 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:03.233 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:03.233 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:03.233 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:03.233 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:03.233 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:03.233 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:03.233 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:03.233 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:03.233 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:03.233 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:03.233 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:03.233 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:03.233 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:03.233 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:03.233 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:03.233 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:03.233 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:03.233 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:03.233 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:03.233 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:03.233 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:03.233 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:03.233 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:03.233 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:03.233 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:03.233 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:03.233 [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:03.497 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:03.497 [60/268] Linking static target lib/librte_telemetry.a 00:01:03.497 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:03.497 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:03.497 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:03.497 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:03.497 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:03.497 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.758 [67/268] Linking target lib/librte_log.so.24.1 00:01:03.758 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:03.758 [69/268] Linking static target lib/librte_pci.a 00:01:03.758 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:04.023 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:04.023 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:04.023 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:04.023 [74/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:04.023 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:04.023 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:04.023 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:04.023 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:04.023 [79/268] Linking target lib/librte_kvargs.so.24.1 00:01:04.023 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:04.023 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:04.023 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:04.023 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:04.290 [84/268] Linking static target lib/librte_ring.a 00:01:04.290 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:04.290 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:04.290 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:04.290 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:04.290 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:04.290 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:04.290 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:04.290 [92/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:04.290 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:04.290 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:04.290 [95/268] Linking static target lib/librte_meter.a 00:01:04.290 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:04.290 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:04.290 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:04.290 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:04.290 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:04.290 [101/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:04.290 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.290 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:04.290 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:04.290 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:04.290 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:04.290 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:04.290 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:04.290 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:04.290 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:04.290 [111/268] Linking static target lib/librte_eal.a 00:01:04.290 [112/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:04.290 [113/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:04.290 [114/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:04.290 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:04.290 [116/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:04.290 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:04.551 [118/268] Linking static target lib/librte_mempool.a 00:01:04.551 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:04.551 [120/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.551 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:04.551 [122/268] Linking static target lib/librte_rcu.a 00:01:04.551 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:04.551 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:04.551 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:04.551 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:04.551 [127/268] Linking target lib/librte_telemetry.so.24.1 00:01:04.551 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:04.551 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:04.551 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:04.551 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:04.551 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:04.551 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:04.812 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.812 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.812 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:04.812 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:04.812 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:04.812 [139/268] Linking static target lib/librte_net.a 00:01:04.812 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:04.812 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:04.812 [142/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:05.074 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:05.074 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:05.074 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:05.074 [146/268] Linking static target lib/librte_cmdline.a 00:01:05.074 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:05.074 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:05.333 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:05.333 [150/268] Linking static target lib/librte_timer.a 00:01:05.333 [151/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.333 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 
00:01:05.333 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:05.333 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:05.333 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.333 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:05.333 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:05.333 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:05.334 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:05.334 [160/268] Linking static target lib/librte_dmadev.a 00:01:05.334 [161/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:05.334 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:05.334 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:05.593 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:05.593 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:05.593 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.593 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:05.593 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:05.593 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:05.593 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:05.593 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:05.593 [172/268] Linking static target lib/librte_power.a 00:01:05.593 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:05.593 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.593 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:05.593 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:05.593 [177/268] Linking static target lib/librte_compressdev.a 00:01:05.852 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:05.852 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:05.852 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:05.852 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:05.852 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:05.852 [183/268] Linking static target lib/librte_hash.a 00:01:05.852 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:05.852 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:05.852 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:05.852 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.852 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:05.852 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:05.852 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:05.852 [191/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:05.852 [192/268] Linking 
static target drivers/libtmp_rte_bus_pci.a 00:01:06.111 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:06.111 [194/268] Linking static target lib/librte_mbuf.a 00:01:06.111 [195/268] Linking static target lib/librte_reorder.a 00:01:06.111 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:06.111 [197/268] Linking static target lib/librte_security.a 00:01:06.111 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:06.111 [199/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.111 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:06.111 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:06.111 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:06.111 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:06.111 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:06.111 [205/268] Linking static target drivers/librte_bus_vdev.a 00:01:06.111 [206/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.111 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:06.111 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.111 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:06.111 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:06.111 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:06.369 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.369 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:06.369 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:06.369 [215/268] Linking static target lib/librte_ethdev.a 00:01:06.369 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.369 [217/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.369 [218/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:06.369 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.369 [220/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:06.369 [221/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:06.369 [222/268] Linking static target drivers/librte_mempool_ring.a 00:01:06.369 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.627 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:06.627 [225/268] Linking static target lib/librte_cryptodev.a 00:01:06.627 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.563 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.939 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:10.316 [229/268] Generating lib/ethdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:10.575 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.575 [231/268] Linking target lib/librte_eal.so.24.1 00:01:10.575 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:10.833 [233/268] Linking target lib/librte_timer.so.24.1 00:01:10.833 [234/268] Linking target lib/librte_ring.so.24.1 00:01:10.833 [235/268] Linking target lib/librte_meter.so.24.1 00:01:10.833 [236/268] Linking target lib/librte_pci.so.24.1 00:01:10.833 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:10.833 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:10.833 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:10.833 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:10.833 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:10.833 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:10.833 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:10.833 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:10.833 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:10.833 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:11.091 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:11.091 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:11.091 [249/268] Linking target lib/librte_mbuf.so.24.1 00:01:11.092 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:11.092 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:11.092 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:11.092 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:11.092 [254/268] Linking target lib/librte_net.so.24.1 00:01:11.092 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:11.350 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:11.350 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:11.350 [258/268] Linking target lib/librte_security.so.24.1 00:01:11.350 [259/268] Linking target lib/librte_hash.so.24.1 00:01:11.350 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:11.350 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:11.609 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:11.609 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:11.609 [264/268] Linking target lib/librte_power.so.24.1 00:01:14.140 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:14.140 [266/268] Linking static target lib/librte_vhost.a 00:01:15.074 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.332 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:15.332 INFO: autodetecting backend as ninja 00:01:15.332 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:16.268 CC lib/ut/ut.o 00:01:16.268 CC lib/log/log.o 00:01:16.268 CC lib/log/log_flags.o 00:01:16.268 CC lib/log/log_deprecated.o 00:01:16.268 CC lib/ut_mock/mock.o 00:01:16.268 LIB libspdk_log.a 
00:01:16.268 LIB libspdk_ut.a 00:01:16.268 LIB libspdk_ut_mock.a 00:01:16.268 SO libspdk_ut.so.2.0 00:01:16.268 SO libspdk_log.so.7.0 00:01:16.268 SO libspdk_ut_mock.so.6.0 00:01:16.526 SYMLINK libspdk_ut.so 00:01:16.526 SYMLINK libspdk_ut_mock.so 00:01:16.526 SYMLINK libspdk_log.so 00:01:16.526 CC lib/dma/dma.o 00:01:16.526 CXX lib/trace_parser/trace.o 00:01:16.526 CC lib/ioat/ioat.o 00:01:16.526 CC lib/util/base64.o 00:01:16.526 CC lib/util/bit_array.o 00:01:16.526 CC lib/util/cpuset.o 00:01:16.526 CC lib/util/crc16.o 00:01:16.526 CC lib/util/crc32.o 00:01:16.526 CC lib/util/crc32c.o 00:01:16.526 CC lib/util/crc32_ieee.o 00:01:16.526 CC lib/util/crc64.o 00:01:16.526 CC lib/util/dif.o 00:01:16.526 CC lib/util/fd.o 00:01:16.526 CC lib/util/file.o 00:01:16.526 CC lib/util/hexlify.o 00:01:16.526 CC lib/util/iov.o 00:01:16.526 CC lib/util/math.o 00:01:16.526 CC lib/util/pipe.o 00:01:16.526 CC lib/util/strerror_tls.o 00:01:16.526 CC lib/util/string.o 00:01:16.526 CC lib/util/uuid.o 00:01:16.526 CC lib/util/fd_group.o 00:01:16.526 CC lib/util/zipf.o 00:01:16.526 CC lib/util/xor.o 00:01:16.783 CC lib/vfio_user/host/vfio_user_pci.o 00:01:16.783 CC lib/vfio_user/host/vfio_user.o 00:01:16.783 LIB libspdk_dma.a 00:01:16.783 SO libspdk_dma.so.4.0 00:01:16.783 SYMLINK libspdk_dma.so 00:01:16.783 LIB libspdk_ioat.a 00:01:17.040 SO libspdk_ioat.so.7.0 00:01:17.040 LIB libspdk_vfio_user.a 00:01:17.040 SYMLINK libspdk_ioat.so 00:01:17.040 SO libspdk_vfio_user.so.5.0 00:01:17.040 SYMLINK libspdk_vfio_user.so 00:01:17.040 LIB libspdk_util.a 00:01:17.297 SO libspdk_util.so.9.1 00:01:17.297 SYMLINK libspdk_util.so 00:01:17.554 CC lib/json/json_parse.o 00:01:17.554 CC lib/rdma_provider/common.o 00:01:17.554 CC lib/conf/conf.o 00:01:17.554 CC lib/rdma_utils/rdma_utils.o 00:01:17.554 CC lib/env_dpdk/env.o 00:01:17.554 CC lib/idxd/idxd.o 00:01:17.554 CC lib/json/json_util.o 00:01:17.554 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:17.554 CC lib/vmd/vmd.o 00:01:17.554 CC lib/idxd/idxd_user.o 00:01:17.554 CC lib/json/json_write.o 00:01:17.554 CC lib/env_dpdk/memory.o 00:01:17.554 CC lib/vmd/led.o 00:01:17.554 CC lib/idxd/idxd_kernel.o 00:01:17.554 CC lib/env_dpdk/pci.o 00:01:17.554 CC lib/env_dpdk/init.o 00:01:17.554 CC lib/env_dpdk/threads.o 00:01:17.554 CC lib/env_dpdk/pci_ioat.o 00:01:17.554 CC lib/env_dpdk/pci_virtio.o 00:01:17.554 CC lib/env_dpdk/pci_vmd.o 00:01:17.554 CC lib/env_dpdk/pci_idxd.o 00:01:17.554 CC lib/env_dpdk/pci_event.o 00:01:17.554 CC lib/env_dpdk/sigbus_handler.o 00:01:17.554 CC lib/env_dpdk/pci_dpdk.o 00:01:17.554 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:17.554 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:17.554 LIB libspdk_trace_parser.a 00:01:17.554 SO libspdk_trace_parser.so.5.0 00:01:17.812 SYMLINK libspdk_trace_parser.so 00:01:17.812 LIB libspdk_conf.a 00:01:17.812 LIB libspdk_rdma_utils.a 00:01:17.812 LIB libspdk_rdma_provider.a 00:01:17.812 SO libspdk_conf.so.6.0 00:01:17.812 SO libspdk_rdma_utils.so.1.0 00:01:17.812 SO libspdk_rdma_provider.so.6.0 00:01:17.812 SYMLINK libspdk_conf.so 00:01:17.812 SYMLINK libspdk_rdma_utils.so 00:01:17.812 SYMLINK libspdk_rdma_provider.so 00:01:17.812 LIB libspdk_json.a 00:01:18.069 SO libspdk_json.so.6.0 00:01:18.069 SYMLINK libspdk_json.so 00:01:18.069 LIB libspdk_idxd.a 00:01:18.069 SO libspdk_idxd.so.12.0 00:01:18.069 CC lib/jsonrpc/jsonrpc_server.o 00:01:18.069 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:18.069 CC lib/jsonrpc/jsonrpc_client.o 00:01:18.069 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:18.069 SYMLINK libspdk_idxd.so 00:01:18.327 LIB 
libspdk_vmd.a 00:01:18.327 SO libspdk_vmd.so.6.0 00:01:18.327 SYMLINK libspdk_vmd.so 00:01:18.327 LIB libspdk_jsonrpc.a 00:01:18.585 SO libspdk_jsonrpc.so.6.0 00:01:18.585 SYMLINK libspdk_jsonrpc.so 00:01:18.585 CC lib/rpc/rpc.o 00:01:18.843 LIB libspdk_rpc.a 00:01:18.843 SO libspdk_rpc.so.6.0 00:01:19.102 SYMLINK libspdk_rpc.so 00:01:19.102 CC lib/notify/notify.o 00:01:19.102 CC lib/notify/notify_rpc.o 00:01:19.102 CC lib/keyring/keyring.o 00:01:19.102 CC lib/keyring/keyring_rpc.o 00:01:19.102 CC lib/trace/trace.o 00:01:19.102 CC lib/trace/trace_flags.o 00:01:19.102 CC lib/trace/trace_rpc.o 00:01:19.360 LIB libspdk_notify.a 00:01:19.360 SO libspdk_notify.so.6.0 00:01:19.360 LIB libspdk_keyring.a 00:01:19.360 SYMLINK libspdk_notify.so 00:01:19.360 LIB libspdk_trace.a 00:01:19.360 SO libspdk_keyring.so.1.0 00:01:19.360 SO libspdk_trace.so.10.0 00:01:19.619 SYMLINK libspdk_keyring.so 00:01:19.619 SYMLINK libspdk_trace.so 00:01:19.619 LIB libspdk_env_dpdk.a 00:01:19.619 SO libspdk_env_dpdk.so.14.1 00:01:19.619 CC lib/thread/thread.o 00:01:19.619 CC lib/thread/iobuf.o 00:01:19.619 CC lib/sock/sock.o 00:01:19.619 CC lib/sock/sock_rpc.o 00:01:19.877 SYMLINK libspdk_env_dpdk.so 00:01:20.136 LIB libspdk_sock.a 00:01:20.136 SO libspdk_sock.so.10.0 00:01:20.136 SYMLINK libspdk_sock.so 00:01:20.395 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:20.395 CC lib/nvme/nvme_ctrlr.o 00:01:20.395 CC lib/nvme/nvme_fabric.o 00:01:20.395 CC lib/nvme/nvme_ns_cmd.o 00:01:20.395 CC lib/nvme/nvme_ns.o 00:01:20.395 CC lib/nvme/nvme_pcie_common.o 00:01:20.395 CC lib/nvme/nvme_pcie.o 00:01:20.395 CC lib/nvme/nvme_qpair.o 00:01:20.395 CC lib/nvme/nvme.o 00:01:20.395 CC lib/nvme/nvme_quirks.o 00:01:20.395 CC lib/nvme/nvme_transport.o 00:01:20.395 CC lib/nvme/nvme_discovery.o 00:01:20.395 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:20.395 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:20.395 CC lib/nvme/nvme_tcp.o 00:01:20.395 CC lib/nvme/nvme_opal.o 00:01:20.395 CC lib/nvme/nvme_io_msg.o 00:01:20.395 CC lib/nvme/nvme_poll_group.o 00:01:20.395 CC lib/nvme/nvme_zns.o 00:01:20.395 CC lib/nvme/nvme_stubs.o 00:01:20.395 CC lib/nvme/nvme_auth.o 00:01:20.395 CC lib/nvme/nvme_cuse.o 00:01:20.395 CC lib/nvme/nvme_vfio_user.o 00:01:20.395 CC lib/nvme/nvme_rdma.o 00:01:21.330 LIB libspdk_thread.a 00:01:21.330 SO libspdk_thread.so.10.1 00:01:21.330 SYMLINK libspdk_thread.so 00:01:21.587 CC lib/blob/blobstore.o 00:01:21.587 CC lib/vfu_tgt/tgt_endpoint.o 00:01:21.587 CC lib/init/json_config.o 00:01:21.587 CC lib/virtio/virtio.o 00:01:21.587 CC lib/accel/accel.o 00:01:21.587 CC lib/virtio/virtio_vhost_user.o 00:01:21.587 CC lib/blob/request.o 00:01:21.587 CC lib/init/subsystem.o 00:01:21.587 CC lib/vfu_tgt/tgt_rpc.o 00:01:21.587 CC lib/accel/accel_rpc.o 00:01:21.587 CC lib/blob/zeroes.o 00:01:21.587 CC lib/init/subsystem_rpc.o 00:01:21.587 CC lib/virtio/virtio_vfio_user.o 00:01:21.587 CC lib/virtio/virtio_pci.o 00:01:21.587 CC lib/init/rpc.o 00:01:21.587 CC lib/accel/accel_sw.o 00:01:21.587 CC lib/blob/blob_bs_dev.o 00:01:21.845 LIB libspdk_init.a 00:01:21.845 SO libspdk_init.so.5.0 00:01:21.845 LIB libspdk_virtio.a 00:01:21.845 LIB libspdk_vfu_tgt.a 00:01:21.845 SYMLINK libspdk_init.so 00:01:21.845 SO libspdk_vfu_tgt.so.3.0 00:01:21.845 SO libspdk_virtio.so.7.0 00:01:22.103 SYMLINK libspdk_vfu_tgt.so 00:01:22.103 SYMLINK libspdk_virtio.so 00:01:22.103 CC lib/event/app.o 00:01:22.103 CC lib/event/reactor.o 00:01:22.103 CC lib/event/log_rpc.o 00:01:22.103 CC lib/event/app_rpc.o 00:01:22.103 CC lib/event/scheduler_static.o 00:01:22.361 LIB libspdk_event.a 
00:01:22.618 SO libspdk_event.so.14.0 00:01:22.618 LIB libspdk_accel.a 00:01:22.618 SYMLINK libspdk_event.so 00:01:22.618 SO libspdk_accel.so.15.1 00:01:22.618 SYMLINK libspdk_accel.so 00:01:22.874 CC lib/bdev/bdev.o 00:01:22.874 CC lib/bdev/bdev_rpc.o 00:01:22.874 CC lib/bdev/bdev_zone.o 00:01:22.874 CC lib/bdev/part.o 00:01:22.874 CC lib/bdev/scsi_nvme.o 00:01:22.874 LIB libspdk_nvme.a 00:01:23.130 SO libspdk_nvme.so.13.1 00:01:23.387 SYMLINK libspdk_nvme.so 00:01:24.760 LIB libspdk_blob.a 00:01:24.760 SO libspdk_blob.so.11.0 00:01:24.760 SYMLINK libspdk_blob.so 00:01:24.760 CC lib/blobfs/blobfs.o 00:01:24.760 CC lib/blobfs/tree.o 00:01:24.760 CC lib/lvol/lvol.o 00:01:25.693 LIB libspdk_blobfs.a 00:01:25.693 SO libspdk_blobfs.so.10.0 00:01:25.693 SYMLINK libspdk_blobfs.so 00:01:25.693 LIB libspdk_lvol.a 00:01:25.693 SO libspdk_lvol.so.10.0 00:01:25.693 LIB libspdk_bdev.a 00:01:25.693 SYMLINK libspdk_lvol.so 00:01:25.693 SO libspdk_bdev.so.15.1 00:01:25.960 SYMLINK libspdk_bdev.so 00:01:25.960 CC lib/ublk/ublk.o 00:01:25.960 CC lib/nvmf/ctrlr.o 00:01:25.960 CC lib/ublk/ublk_rpc.o 00:01:25.960 CC lib/ftl/ftl_core.o 00:01:25.960 CC lib/nvmf/ctrlr_discovery.o 00:01:25.960 CC lib/ftl/ftl_init.o 00:01:25.960 CC lib/nvmf/ctrlr_bdev.o 00:01:25.960 CC lib/nvmf/subsystem.o 00:01:25.960 CC lib/ftl/ftl_layout.o 00:01:25.960 CC lib/nbd/nbd.o 00:01:25.960 CC lib/nvmf/nvmf.o 00:01:25.960 CC lib/ftl/ftl_debug.o 00:01:25.960 CC lib/nbd/nbd_rpc.o 00:01:25.960 CC lib/scsi/dev.o 00:01:25.960 CC lib/nvmf/nvmf_rpc.o 00:01:25.960 CC lib/ftl/ftl_io.o 00:01:25.960 CC lib/nvmf/transport.o 00:01:25.960 CC lib/scsi/lun.o 00:01:25.960 CC lib/ftl/ftl_sb.o 00:01:25.960 CC lib/scsi/port.o 00:01:25.960 CC lib/ftl/ftl_l2p.o 00:01:25.960 CC lib/nvmf/tcp.o 00:01:25.960 CC lib/ftl/ftl_l2p_flat.o 00:01:25.960 CC lib/nvmf/stubs.o 00:01:25.960 CC lib/scsi/scsi.o 00:01:25.960 CC lib/scsi/scsi_bdev.o 00:01:25.960 CC lib/nvmf/mdns_server.o 00:01:25.960 CC lib/ftl/ftl_nv_cache.o 00:01:25.960 CC lib/nvmf/vfio_user.o 00:01:25.960 CC lib/scsi/scsi_pr.o 00:01:25.960 CC lib/ftl/ftl_band_ops.o 00:01:25.960 CC lib/ftl/ftl_band.o 00:01:25.960 CC lib/scsi/scsi_rpc.o 00:01:25.960 CC lib/nvmf/rdma.o 00:01:25.960 CC lib/nvmf/auth.o 00:01:25.960 CC lib/scsi/task.o 00:01:25.960 CC lib/ftl/ftl_writer.o 00:01:25.960 CC lib/ftl/ftl_rq.o 00:01:25.960 CC lib/ftl/ftl_reloc.o 00:01:25.960 CC lib/ftl/ftl_l2p_cache.o 00:01:25.960 CC lib/ftl/ftl_p2l.o 00:01:25.960 CC lib/ftl/mngt/ftl_mngt.o 00:01:25.960 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:25.960 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:25.960 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:25.960 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:25.960 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:25.960 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:26.531 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:26.531 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:26.531 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:26.531 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:26.531 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:26.531 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:26.531 CC lib/ftl/utils/ftl_conf.o 00:01:26.531 CC lib/ftl/utils/ftl_md.o 00:01:26.531 CC lib/ftl/utils/ftl_mempool.o 00:01:26.531 CC lib/ftl/utils/ftl_bitmap.o 00:01:26.531 CC lib/ftl/utils/ftl_property.o 00:01:26.531 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:26.531 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:26.531 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:26.531 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:26.531 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:26.531 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:01:26.531 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:26.531 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:26.531 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:26.531 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:26.789 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:26.789 CC lib/ftl/base/ftl_base_dev.o 00:01:26.789 CC lib/ftl/base/ftl_base_bdev.o 00:01:26.789 CC lib/ftl/ftl_trace.o 00:01:26.789 LIB libspdk_nbd.a 00:01:26.789 SO libspdk_nbd.so.7.0 00:01:27.047 SYMLINK libspdk_nbd.so 00:01:27.047 LIB libspdk_scsi.a 00:01:27.047 SO libspdk_scsi.so.9.0 00:01:27.047 SYMLINK libspdk_scsi.so 00:01:27.047 LIB libspdk_ublk.a 00:01:27.304 SO libspdk_ublk.so.3.0 00:01:27.304 SYMLINK libspdk_ublk.so 00:01:27.304 CC lib/vhost/vhost.o 00:01:27.304 CC lib/vhost/vhost_rpc.o 00:01:27.304 CC lib/iscsi/conn.o 00:01:27.304 CC lib/vhost/vhost_scsi.o 00:01:27.304 CC lib/iscsi/init_grp.o 00:01:27.304 CC lib/vhost/vhost_blk.o 00:01:27.304 CC lib/iscsi/iscsi.o 00:01:27.304 CC lib/vhost/rte_vhost_user.o 00:01:27.304 CC lib/iscsi/md5.o 00:01:27.304 CC lib/iscsi/param.o 00:01:27.304 CC lib/iscsi/portal_grp.o 00:01:27.304 CC lib/iscsi/tgt_node.o 00:01:27.305 CC lib/iscsi/iscsi_subsystem.o 00:01:27.305 CC lib/iscsi/iscsi_rpc.o 00:01:27.305 CC lib/iscsi/task.o 00:01:27.562 LIB libspdk_ftl.a 00:01:27.820 SO libspdk_ftl.so.9.0 00:01:28.078 SYMLINK libspdk_ftl.so 00:01:28.641 LIB libspdk_vhost.a 00:01:28.641 SO libspdk_vhost.so.8.0 00:01:28.641 LIB libspdk_nvmf.a 00:01:28.641 SYMLINK libspdk_vhost.so 00:01:28.642 SO libspdk_nvmf.so.18.1 00:01:28.642 LIB libspdk_iscsi.a 00:01:28.642 SO libspdk_iscsi.so.8.0 00:01:28.900 SYMLINK libspdk_nvmf.so 00:01:28.900 SYMLINK libspdk_iscsi.so 00:01:29.158 CC module/vfu_device/vfu_virtio.o 00:01:29.158 CC module/env_dpdk/env_dpdk_rpc.o 00:01:29.158 CC module/vfu_device/vfu_virtio_blk.o 00:01:29.158 CC module/vfu_device/vfu_virtio_scsi.o 00:01:29.158 CC module/vfu_device/vfu_virtio_rpc.o 00:01:29.158 CC module/accel/error/accel_error.o 00:01:29.158 CC module/accel/dsa/accel_dsa.o 00:01:29.158 CC module/accel/error/accel_error_rpc.o 00:01:29.158 CC module/accel/ioat/accel_ioat.o 00:01:29.158 CC module/scheduler/gscheduler/gscheduler.o 00:01:29.158 CC module/accel/dsa/accel_dsa_rpc.o 00:01:29.158 CC module/accel/iaa/accel_iaa.o 00:01:29.158 CC module/accel/ioat/accel_ioat_rpc.o 00:01:29.158 CC module/sock/posix/posix.o 00:01:29.158 CC module/accel/iaa/accel_iaa_rpc.o 00:01:29.158 CC module/keyring/linux/keyring.o 00:01:29.158 CC module/keyring/file/keyring.o 00:01:29.158 CC module/blob/bdev/blob_bdev.o 00:01:29.158 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:29.158 CC module/keyring/file/keyring_rpc.o 00:01:29.158 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:29.158 CC module/keyring/linux/keyring_rpc.o 00:01:29.417 LIB libspdk_env_dpdk_rpc.a 00:01:29.417 SO libspdk_env_dpdk_rpc.so.6.0 00:01:29.417 SYMLINK libspdk_env_dpdk_rpc.so 00:01:29.417 LIB libspdk_keyring_linux.a 00:01:29.417 LIB libspdk_keyring_file.a 00:01:29.418 LIB libspdk_scheduler_gscheduler.a 00:01:29.418 LIB libspdk_scheduler_dpdk_governor.a 00:01:29.418 SO libspdk_keyring_linux.so.1.0 00:01:29.418 SO libspdk_keyring_file.so.1.0 00:01:29.418 SO libspdk_scheduler_gscheduler.so.4.0 00:01:29.418 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:29.418 LIB libspdk_accel_error.a 00:01:29.418 LIB libspdk_accel_ioat.a 00:01:29.418 LIB libspdk_scheduler_dynamic.a 00:01:29.418 LIB libspdk_accel_iaa.a 00:01:29.418 SO libspdk_accel_error.so.2.0 00:01:29.418 SO libspdk_accel_ioat.so.6.0 00:01:29.418 SYMLINK libspdk_keyring_linux.so 00:01:29.418 
SYMLINK libspdk_keyring_file.so 00:01:29.418 SO libspdk_scheduler_dynamic.so.4.0 00:01:29.418 SO libspdk_accel_iaa.so.3.0 00:01:29.418 SYMLINK libspdk_scheduler_gscheduler.so 00:01:29.418 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:29.675 LIB libspdk_accel_dsa.a 00:01:29.675 SYMLINK libspdk_accel_error.so 00:01:29.675 SYMLINK libspdk_accel_ioat.so 00:01:29.675 LIB libspdk_blob_bdev.a 00:01:29.675 SYMLINK libspdk_scheduler_dynamic.so 00:01:29.675 SYMLINK libspdk_accel_iaa.so 00:01:29.675 SO libspdk_accel_dsa.so.5.0 00:01:29.675 SO libspdk_blob_bdev.so.11.0 00:01:29.675 SYMLINK libspdk_accel_dsa.so 00:01:29.675 SYMLINK libspdk_blob_bdev.so 00:01:29.946 LIB libspdk_vfu_device.a 00:01:29.946 SO libspdk_vfu_device.so.3.0 00:01:29.946 CC module/bdev/lvol/vbdev_lvol.o 00:01:29.946 CC module/bdev/delay/vbdev_delay.o 00:01:29.946 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:29.946 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:29.946 CC module/bdev/error/vbdev_error.o 00:01:29.946 CC module/blobfs/bdev/blobfs_bdev.o 00:01:29.946 CC module/bdev/malloc/bdev_malloc.o 00:01:29.946 CC module/bdev/error/vbdev_error_rpc.o 00:01:29.946 CC module/bdev/gpt/gpt.o 00:01:29.946 CC module/bdev/null/bdev_null.o 00:01:29.946 CC module/bdev/gpt/vbdev_gpt.o 00:01:29.946 CC module/bdev/nvme/bdev_nvme.o 00:01:29.946 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:29.946 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:29.946 CC module/bdev/split/vbdev_split.o 00:01:29.946 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:29.946 CC module/bdev/nvme/nvme_rpc.o 00:01:29.946 CC module/bdev/null/bdev_null_rpc.o 00:01:29.946 CC module/bdev/passthru/vbdev_passthru.o 00:01:29.946 CC module/bdev/aio/bdev_aio.o 00:01:29.946 CC module/bdev/split/vbdev_split_rpc.o 00:01:29.946 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:29.946 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:29.946 CC module/bdev/aio/bdev_aio_rpc.o 00:01:29.946 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:29.946 CC module/bdev/nvme/bdev_mdns_client.o 00:01:29.946 CC module/bdev/raid/bdev_raid.o 00:01:29.946 CC module/bdev/iscsi/bdev_iscsi.o 00:01:29.946 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:29.946 CC module/bdev/nvme/vbdev_opal.o 00:01:29.946 CC module/bdev/raid/bdev_raid_rpc.o 00:01:29.946 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:29.946 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:29.946 CC module/bdev/raid/bdev_raid_sb.o 00:01:29.946 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:29.946 CC module/bdev/ftl/bdev_ftl.o 00:01:29.946 CC module/bdev/raid/raid0.o 00:01:29.946 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:29.946 CC module/bdev/raid/raid1.o 00:01:29.946 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:29.946 CC module/bdev/raid/concat.o 00:01:29.946 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:29.946 SYMLINK libspdk_vfu_device.so 00:01:30.244 LIB libspdk_sock_posix.a 00:01:30.244 LIB libspdk_bdev_split.a 00:01:30.244 SO libspdk_sock_posix.so.6.0 00:01:30.244 SO libspdk_bdev_split.so.6.0 00:01:30.245 LIB libspdk_blobfs_bdev.a 00:01:30.245 SO libspdk_blobfs_bdev.so.6.0 00:01:30.245 SYMLINK libspdk_bdev_split.so 00:01:30.245 SYMLINK libspdk_sock_posix.so 00:01:30.245 LIB libspdk_bdev_error.a 00:01:30.245 LIB libspdk_bdev_gpt.a 00:01:30.503 SYMLINK libspdk_blobfs_bdev.so 00:01:30.503 LIB libspdk_bdev_malloc.a 00:01:30.503 SO libspdk_bdev_error.so.6.0 00:01:30.503 LIB libspdk_bdev_ftl.a 00:01:30.503 SO libspdk_bdev_gpt.so.6.0 00:01:30.503 SO libspdk_bdev_malloc.so.6.0 00:01:30.503 LIB libspdk_bdev_null.a 00:01:30.503 LIB libspdk_bdev_aio.a 
00:01:30.503 SO libspdk_bdev_ftl.so.6.0 00:01:30.503 SO libspdk_bdev_null.so.6.0 00:01:30.503 SO libspdk_bdev_aio.so.6.0 00:01:30.503 LIB libspdk_bdev_zone_block.a 00:01:30.503 SYMLINK libspdk_bdev_error.so 00:01:30.503 SYMLINK libspdk_bdev_gpt.so 00:01:30.503 SYMLINK libspdk_bdev_malloc.so 00:01:30.503 LIB libspdk_bdev_passthru.a 00:01:30.503 SO libspdk_bdev_zone_block.so.6.0 00:01:30.503 SYMLINK libspdk_bdev_ftl.so 00:01:30.503 SO libspdk_bdev_passthru.so.6.0 00:01:30.503 SYMLINK libspdk_bdev_null.so 00:01:30.503 SYMLINK libspdk_bdev_aio.so 00:01:30.503 LIB libspdk_bdev_delay.a 00:01:30.503 LIB libspdk_bdev_iscsi.a 00:01:30.503 SYMLINK libspdk_bdev_zone_block.so 00:01:30.503 SO libspdk_bdev_delay.so.6.0 00:01:30.503 SO libspdk_bdev_iscsi.so.6.0 00:01:30.503 SYMLINK libspdk_bdev_passthru.so 00:01:30.503 SYMLINK libspdk_bdev_iscsi.so 00:01:30.503 SYMLINK libspdk_bdev_delay.so 00:01:30.762 LIB libspdk_bdev_lvol.a 00:01:30.762 SO libspdk_bdev_lvol.so.6.0 00:01:30.762 LIB libspdk_bdev_virtio.a 00:01:30.762 SO libspdk_bdev_virtio.so.6.0 00:01:30.762 SYMLINK libspdk_bdev_lvol.so 00:01:30.762 SYMLINK libspdk_bdev_virtio.so 00:01:31.020 LIB libspdk_bdev_raid.a 00:01:31.020 SO libspdk_bdev_raid.so.6.0 00:01:31.279 SYMLINK libspdk_bdev_raid.so 00:01:32.214 LIB libspdk_bdev_nvme.a 00:01:32.214 SO libspdk_bdev_nvme.so.7.0 00:01:32.473 SYMLINK libspdk_bdev_nvme.so 00:01:32.731 CC module/event/subsystems/iobuf/iobuf.o 00:01:32.731 CC module/event/subsystems/scheduler/scheduler.o 00:01:32.731 CC module/event/subsystems/sock/sock.o 00:01:32.731 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:32.731 CC module/event/subsystems/keyring/keyring.o 00:01:32.731 CC module/event/subsystems/vmd/vmd.o 00:01:32.731 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:32.731 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:32.731 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:32.992 LIB libspdk_event_keyring.a 00:01:32.992 LIB libspdk_event_vhost_blk.a 00:01:32.992 LIB libspdk_event_scheduler.a 00:01:32.992 LIB libspdk_event_vmd.a 00:01:32.992 LIB libspdk_event_sock.a 00:01:32.992 SO libspdk_event_keyring.so.1.0 00:01:32.992 LIB libspdk_event_iobuf.a 00:01:32.992 SO libspdk_event_vhost_blk.so.3.0 00:01:32.992 SO libspdk_event_scheduler.so.4.0 00:01:32.992 SO libspdk_event_sock.so.5.0 00:01:32.992 SO libspdk_event_vmd.so.6.0 00:01:32.992 LIB libspdk_event_vfu_tgt.a 00:01:32.992 SO libspdk_event_iobuf.so.3.0 00:01:32.992 SO libspdk_event_vfu_tgt.so.3.0 00:01:32.992 SYMLINK libspdk_event_keyring.so 00:01:32.992 SYMLINK libspdk_event_vhost_blk.so 00:01:32.992 SYMLINK libspdk_event_scheduler.so 00:01:32.992 SYMLINK libspdk_event_sock.so 00:01:32.992 SYMLINK libspdk_event_vmd.so 00:01:32.992 SYMLINK libspdk_event_iobuf.so 00:01:32.992 SYMLINK libspdk_event_vfu_tgt.so 00:01:33.253 CC module/event/subsystems/accel/accel.o 00:01:33.253 LIB libspdk_event_accel.a 00:01:33.514 SO libspdk_event_accel.so.6.0 00:01:33.514 SYMLINK libspdk_event_accel.so 00:01:33.514 CC module/event/subsystems/bdev/bdev.o 00:01:33.773 LIB libspdk_event_bdev.a 00:01:33.773 SO libspdk_event_bdev.so.6.0 00:01:33.773 SYMLINK libspdk_event_bdev.so 00:01:34.030 CC module/event/subsystems/ublk/ublk.o 00:01:34.030 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:34.030 CC module/event/subsystems/scsi/scsi.o 00:01:34.031 CC module/event/subsystems/nbd/nbd.o 00:01:34.031 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:34.031 LIB libspdk_event_ublk.a 00:01:34.031 LIB libspdk_event_nbd.a 00:01:34.288 LIB libspdk_event_scsi.a 00:01:34.288 SO 
libspdk_event_ublk.so.3.0 00:01:34.288 SO libspdk_event_nbd.so.6.0 00:01:34.288 SO libspdk_event_scsi.so.6.0 00:01:34.288 SYMLINK libspdk_event_ublk.so 00:01:34.288 SYMLINK libspdk_event_nbd.so 00:01:34.288 SYMLINK libspdk_event_scsi.so 00:01:34.288 LIB libspdk_event_nvmf.a 00:01:34.288 SO libspdk_event_nvmf.so.6.0 00:01:34.288 SYMLINK libspdk_event_nvmf.so 00:01:34.288 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:34.545 CC module/event/subsystems/iscsi/iscsi.o 00:01:34.545 LIB libspdk_event_vhost_scsi.a 00:01:34.545 LIB libspdk_event_iscsi.a 00:01:34.545 SO libspdk_event_vhost_scsi.so.3.0 00:01:34.545 SO libspdk_event_iscsi.so.6.0 00:01:34.545 SYMLINK libspdk_event_vhost_scsi.so 00:01:34.545 SYMLINK libspdk_event_iscsi.so 00:01:34.802 SO libspdk.so.6.0 00:01:34.802 SYMLINK libspdk.so 00:01:35.066 CXX app/trace/trace.o 00:01:35.066 CC test/rpc_client/rpc_client_test.o 00:01:35.066 CC app/spdk_nvme_perf/perf.o 00:01:35.066 CC app/spdk_lspci/spdk_lspci.o 00:01:35.066 CC app/spdk_nvme_discover/discovery_aer.o 00:01:35.066 TEST_HEADER include/spdk/accel.h 00:01:35.066 CC app/spdk_top/spdk_top.o 00:01:35.066 TEST_HEADER include/spdk/accel_module.h 00:01:35.066 CC app/trace_record/trace_record.o 00:01:35.066 TEST_HEADER include/spdk/assert.h 00:01:35.066 TEST_HEADER include/spdk/barrier.h 00:01:35.066 TEST_HEADER include/spdk/base64.h 00:01:35.066 TEST_HEADER include/spdk/bdev.h 00:01:35.066 TEST_HEADER include/spdk/bdev_module.h 00:01:35.066 CC app/spdk_nvme_identify/identify.o 00:01:35.066 TEST_HEADER include/spdk/bdev_zone.h 00:01:35.066 TEST_HEADER include/spdk/bit_array.h 00:01:35.066 TEST_HEADER include/spdk/bit_pool.h 00:01:35.066 TEST_HEADER include/spdk/blob_bdev.h 00:01:35.066 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:35.066 TEST_HEADER include/spdk/blobfs.h 00:01:35.066 TEST_HEADER include/spdk/blob.h 00:01:35.066 TEST_HEADER include/spdk/conf.h 00:01:35.066 TEST_HEADER include/spdk/config.h 00:01:35.066 TEST_HEADER include/spdk/cpuset.h 00:01:35.066 TEST_HEADER include/spdk/crc16.h 00:01:35.066 TEST_HEADER include/spdk/crc32.h 00:01:35.066 TEST_HEADER include/spdk/crc64.h 00:01:35.066 TEST_HEADER include/spdk/dif.h 00:01:35.066 TEST_HEADER include/spdk/dma.h 00:01:35.066 TEST_HEADER include/spdk/endian.h 00:01:35.066 TEST_HEADER include/spdk/env_dpdk.h 00:01:35.066 TEST_HEADER include/spdk/env.h 00:01:35.066 TEST_HEADER include/spdk/event.h 00:01:35.066 TEST_HEADER include/spdk/fd_group.h 00:01:35.066 TEST_HEADER include/spdk/fd.h 00:01:35.066 TEST_HEADER include/spdk/file.h 00:01:35.066 TEST_HEADER include/spdk/ftl.h 00:01:35.066 TEST_HEADER include/spdk/gpt_spec.h 00:01:35.066 TEST_HEADER include/spdk/hexlify.h 00:01:35.066 TEST_HEADER include/spdk/histogram_data.h 00:01:35.066 TEST_HEADER include/spdk/idxd.h 00:01:35.066 TEST_HEADER include/spdk/idxd_spec.h 00:01:35.066 TEST_HEADER include/spdk/init.h 00:01:35.066 TEST_HEADER include/spdk/ioat_spec.h 00:01:35.066 TEST_HEADER include/spdk/ioat.h 00:01:35.066 TEST_HEADER include/spdk/iscsi_spec.h 00:01:35.066 TEST_HEADER include/spdk/json.h 00:01:35.066 TEST_HEADER include/spdk/jsonrpc.h 00:01:35.066 TEST_HEADER include/spdk/keyring.h 00:01:35.066 TEST_HEADER include/spdk/keyring_module.h 00:01:35.066 TEST_HEADER include/spdk/likely.h 00:01:35.066 TEST_HEADER include/spdk/log.h 00:01:35.066 TEST_HEADER include/spdk/lvol.h 00:01:35.066 TEST_HEADER include/spdk/mmio.h 00:01:35.066 TEST_HEADER include/spdk/memory.h 00:01:35.066 TEST_HEADER include/spdk/nbd.h 00:01:35.066 TEST_HEADER include/spdk/notify.h 00:01:35.066 
TEST_HEADER include/spdk/nvme.h 00:01:35.066 TEST_HEADER include/spdk/nvme_intel.h 00:01:35.066 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:35.066 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:35.066 TEST_HEADER include/spdk/nvme_spec.h 00:01:35.066 TEST_HEADER include/spdk/nvme_zns.h 00:01:35.066 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:35.066 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:35.066 TEST_HEADER include/spdk/nvmf.h 00:01:35.066 TEST_HEADER include/spdk/nvmf_spec.h 00:01:35.066 TEST_HEADER include/spdk/nvmf_transport.h 00:01:35.066 TEST_HEADER include/spdk/opal.h 00:01:35.066 TEST_HEADER include/spdk/opal_spec.h 00:01:35.066 TEST_HEADER include/spdk/pci_ids.h 00:01:35.066 TEST_HEADER include/spdk/pipe.h 00:01:35.066 TEST_HEADER include/spdk/queue.h 00:01:35.066 TEST_HEADER include/spdk/reduce.h 00:01:35.066 TEST_HEADER include/spdk/rpc.h 00:01:35.066 TEST_HEADER include/spdk/scheduler.h 00:01:35.066 TEST_HEADER include/spdk/scsi.h 00:01:35.066 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:35.066 TEST_HEADER include/spdk/scsi_spec.h 00:01:35.066 TEST_HEADER include/spdk/sock.h 00:01:35.066 TEST_HEADER include/spdk/stdinc.h 00:01:35.066 TEST_HEADER include/spdk/string.h 00:01:35.066 TEST_HEADER include/spdk/thread.h 00:01:35.066 TEST_HEADER include/spdk/trace.h 00:01:35.066 TEST_HEADER include/spdk/trace_parser.h 00:01:35.066 TEST_HEADER include/spdk/tree.h 00:01:35.066 TEST_HEADER include/spdk/ublk.h 00:01:35.066 TEST_HEADER include/spdk/util.h 00:01:35.066 TEST_HEADER include/spdk/uuid.h 00:01:35.066 TEST_HEADER include/spdk/version.h 00:01:35.066 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:35.066 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:35.066 TEST_HEADER include/spdk/vhost.h 00:01:35.066 TEST_HEADER include/spdk/vmd.h 00:01:35.066 TEST_HEADER include/spdk/xor.h 00:01:35.066 TEST_HEADER include/spdk/zipf.h 00:01:35.066 CXX test/cpp_headers/accel.o 00:01:35.066 CXX test/cpp_headers/accel_module.o 00:01:35.066 CXX test/cpp_headers/assert.o 00:01:35.066 CXX test/cpp_headers/barrier.o 00:01:35.066 CXX test/cpp_headers/base64.o 00:01:35.066 CXX test/cpp_headers/bdev.o 00:01:35.066 CXX test/cpp_headers/bdev_module.o 00:01:35.066 CXX test/cpp_headers/bdev_zone.o 00:01:35.066 CXX test/cpp_headers/bit_array.o 00:01:35.066 CXX test/cpp_headers/bit_pool.o 00:01:35.066 CXX test/cpp_headers/blob_bdev.o 00:01:35.067 CXX test/cpp_headers/blobfs_bdev.o 00:01:35.067 CXX test/cpp_headers/blobfs.o 00:01:35.067 CXX test/cpp_headers/blob.o 00:01:35.067 CXX test/cpp_headers/conf.o 00:01:35.067 CXX test/cpp_headers/config.o 00:01:35.067 CXX test/cpp_headers/cpuset.o 00:01:35.067 CXX test/cpp_headers/crc16.o 00:01:35.067 CC app/iscsi_tgt/iscsi_tgt.o 00:01:35.067 CC app/spdk_dd/spdk_dd.o 00:01:35.067 CC app/nvmf_tgt/nvmf_main.o 00:01:35.067 CC test/app/histogram_perf/histogram_perf.o 00:01:35.067 CXX test/cpp_headers/crc32.o 00:01:35.067 CC examples/ioat/verify/verify.o 00:01:35.067 CC examples/ioat/perf/perf.o 00:01:35.067 CC test/thread/poller_perf/poller_perf.o 00:01:35.067 CC examples/util/zipf/zipf.o 00:01:35.067 CC test/env/pci/pci_ut.o 00:01:35.067 CC app/spdk_tgt/spdk_tgt.o 00:01:35.067 CC test/app/stub/stub.o 00:01:35.067 CC test/app/jsoncat/jsoncat.o 00:01:35.067 CC app/fio/nvme/fio_plugin.o 00:01:35.067 CC test/env/memory/memory_ut.o 00:01:35.067 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:35.067 CC test/env/vtophys/vtophys.o 00:01:35.067 CC test/dma/test_dma/test_dma.o 00:01:35.067 CC app/fio/bdev/fio_plugin.o 00:01:35.329 CC test/app/bdev_svc/bdev_svc.o 
00:01:35.329 LINK spdk_lspci 00:01:35.329 CC test/env/mem_callbacks/mem_callbacks.o 00:01:35.329 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:35.329 LINK rpc_client_test 00:01:35.329 LINK spdk_nvme_discover 00:01:35.329 LINK histogram_perf 00:01:35.329 LINK interrupt_tgt 00:01:35.329 LINK jsoncat 00:01:35.329 CXX test/cpp_headers/crc64.o 00:01:35.329 CXX test/cpp_headers/dif.o 00:01:35.329 LINK zipf 00:01:35.329 LINK poller_perf 00:01:35.329 CXX test/cpp_headers/dma.o 00:01:35.329 CXX test/cpp_headers/endian.o 00:01:35.595 CXX test/cpp_headers/env_dpdk.o 00:01:35.595 CXX test/cpp_headers/env.o 00:01:35.595 CXX test/cpp_headers/event.o 00:01:35.595 LINK vtophys 00:01:35.595 CXX test/cpp_headers/fd_group.o 00:01:35.595 CXX test/cpp_headers/fd.o 00:01:35.595 LINK nvmf_tgt 00:01:35.595 LINK env_dpdk_post_init 00:01:35.595 CXX test/cpp_headers/file.o 00:01:35.595 CXX test/cpp_headers/ftl.o 00:01:35.595 LINK stub 00:01:35.595 LINK iscsi_tgt 00:01:35.595 CXX test/cpp_headers/gpt_spec.o 00:01:35.595 CXX test/cpp_headers/hexlify.o 00:01:35.595 LINK spdk_trace_record 00:01:35.595 CXX test/cpp_headers/histogram_data.o 00:01:35.595 CXX test/cpp_headers/idxd.o 00:01:35.595 LINK verify 00:01:35.595 LINK ioat_perf 00:01:35.595 LINK spdk_tgt 00:01:35.595 CXX test/cpp_headers/idxd_spec.o 00:01:35.595 CXX test/cpp_headers/init.o 00:01:35.595 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:35.595 LINK bdev_svc 00:01:35.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:35.595 CXX test/cpp_headers/ioat.o 00:01:35.595 CXX test/cpp_headers/ioat_spec.o 00:01:35.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:35.853 CXX test/cpp_headers/iscsi_spec.o 00:01:35.853 CXX test/cpp_headers/json.o 00:01:35.853 LINK spdk_dd 00:01:35.853 CXX test/cpp_headers/jsonrpc.o 00:01:35.853 CXX test/cpp_headers/keyring.o 00:01:35.853 CXX test/cpp_headers/keyring_module.o 00:01:35.853 LINK spdk_trace 00:01:35.853 CXX test/cpp_headers/likely.o 00:01:35.853 CXX test/cpp_headers/log.o 00:01:35.853 CXX test/cpp_headers/lvol.o 00:01:35.853 CXX test/cpp_headers/memory.o 00:01:35.853 CXX test/cpp_headers/mmio.o 00:01:35.853 LINK pci_ut 00:01:35.853 CXX test/cpp_headers/nbd.o 00:01:35.853 CXX test/cpp_headers/notify.o 00:01:35.853 CXX test/cpp_headers/nvme.o 00:01:35.853 CXX test/cpp_headers/nvme_intel.o 00:01:35.853 CXX test/cpp_headers/nvme_ocssd.o 00:01:35.853 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:35.853 CXX test/cpp_headers/nvme_spec.o 00:01:35.853 CXX test/cpp_headers/nvme_zns.o 00:01:35.853 LINK test_dma 00:01:35.853 CXX test/cpp_headers/nvmf_cmd.o 00:01:35.853 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:35.853 CXX test/cpp_headers/nvmf.o 00:01:35.853 CXX test/cpp_headers/nvmf_spec.o 00:01:35.853 CXX test/cpp_headers/nvmf_transport.o 00:01:35.853 CXX test/cpp_headers/opal.o 00:01:35.853 CXX test/cpp_headers/opal_spec.o 00:01:36.115 CXX test/cpp_headers/pci_ids.o 00:01:36.115 CXX test/cpp_headers/pipe.o 00:01:36.115 LINK nvme_fuzz 00:01:36.115 CXX test/cpp_headers/queue.o 00:01:36.115 CC test/event/event_perf/event_perf.o 00:01:36.115 CXX test/cpp_headers/reduce.o 00:01:36.115 CXX test/cpp_headers/rpc.o 00:01:36.115 CC examples/sock/hello_world/hello_sock.o 00:01:36.115 CC examples/idxd/perf/perf.o 00:01:36.115 CXX test/cpp_headers/scheduler.o 00:01:36.115 CC examples/vmd/lsvmd/lsvmd.o 00:01:36.115 CXX test/cpp_headers/scsi.o 00:01:36.115 LINK spdk_nvme 00:01:36.115 LINK spdk_bdev 00:01:36.115 CC test/event/reactor/reactor.o 00:01:36.115 CC examples/vmd/led/led.o 00:01:36.115 CC examples/thread/thread/thread_ex.o 00:01:36.115 
CC test/event/reactor_perf/reactor_perf.o 00:01:36.375 CC test/event/app_repeat/app_repeat.o 00:01:36.375 CXX test/cpp_headers/scsi_spec.o 00:01:36.375 CXX test/cpp_headers/sock.o 00:01:36.375 CXX test/cpp_headers/stdinc.o 00:01:36.375 CXX test/cpp_headers/string.o 00:01:36.375 CXX test/cpp_headers/thread.o 00:01:36.375 CXX test/cpp_headers/trace.o 00:01:36.375 CXX test/cpp_headers/trace_parser.o 00:01:36.375 CXX test/cpp_headers/tree.o 00:01:36.375 CC test/event/scheduler/scheduler.o 00:01:36.375 CXX test/cpp_headers/ublk.o 00:01:36.375 CXX test/cpp_headers/util.o 00:01:36.375 CXX test/cpp_headers/uuid.o 00:01:36.375 CXX test/cpp_headers/version.o 00:01:36.375 CXX test/cpp_headers/vfio_user_pci.o 00:01:36.375 CXX test/cpp_headers/vfio_user_spec.o 00:01:36.375 CXX test/cpp_headers/vhost.o 00:01:36.375 CXX test/cpp_headers/vmd.o 00:01:36.375 CXX test/cpp_headers/xor.o 00:01:36.375 CXX test/cpp_headers/zipf.o 00:01:36.375 CC app/vhost/vhost.o 00:01:36.375 LINK event_perf 00:01:36.375 LINK lsvmd 00:01:36.375 LINK reactor 00:01:36.375 LINK spdk_nvme_perf 00:01:36.638 LINK led 00:01:36.638 LINK mem_callbacks 00:01:36.638 LINK reactor_perf 00:01:36.638 LINK vhost_fuzz 00:01:36.638 LINK spdk_nvme_identify 00:01:36.638 LINK app_repeat 00:01:36.638 LINK spdk_top 00:01:36.638 LINK hello_sock 00:01:36.638 CC test/nvme/sgl/sgl.o 00:01:36.638 CC test/nvme/overhead/overhead.o 00:01:36.638 CC test/nvme/aer/aer.o 00:01:36.638 LINK thread 00:01:36.638 CC test/nvme/e2edp/nvme_dp.o 00:01:36.638 CC test/nvme/startup/startup.o 00:01:36.638 CC test/nvme/reset/reset.o 00:01:36.638 CC test/nvme/reserve/reserve.o 00:01:36.638 CC test/nvme/simple_copy/simple_copy.o 00:01:36.638 CC test/nvme/err_injection/err_injection.o 00:01:36.638 CC test/blobfs/mkfs/mkfs.o 00:01:36.638 CC test/accel/dif/dif.o 00:01:36.638 CC test/nvme/connect_stress/connect_stress.o 00:01:36.638 CC test/nvme/boot_partition/boot_partition.o 00:01:36.638 CC test/nvme/compliance/nvme_compliance.o 00:01:36.638 CC test/nvme/fused_ordering/fused_ordering.o 00:01:36.896 CC test/nvme/fdp/fdp.o 00:01:36.896 LINK scheduler 00:01:36.896 LINK vhost 00:01:36.896 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:36.896 CC test/lvol/esnap/esnap.o 00:01:36.896 CC test/nvme/cuse/cuse.o 00:01:36.896 LINK idxd_perf 00:01:36.896 LINK err_injection 00:01:36.896 LINK boot_partition 00:01:36.896 LINK mkfs 00:01:36.896 LINK simple_copy 00:01:36.896 LINK startup 00:01:37.154 LINK doorbell_aers 00:01:37.154 LINK reserve 00:01:37.154 LINK reset 00:01:37.154 LINK sgl 00:01:37.154 LINK connect_stress 00:01:37.154 CC examples/nvme/abort/abort.o 00:01:37.154 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:37.154 CC examples/nvme/reconnect/reconnect.o 00:01:37.154 CC examples/nvme/hotplug/hotplug.o 00:01:37.154 CC examples/nvme/arbitration/arbitration.o 00:01:37.154 CC examples/nvme/hello_world/hello_world.o 00:01:37.154 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:37.154 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:37.154 LINK overhead 00:01:37.154 LINK nvme_compliance 00:01:37.154 LINK nvme_dp 00:01:37.154 LINK fused_ordering 00:01:37.154 CC examples/accel/perf/accel_perf.o 00:01:37.154 LINK aer 00:01:37.154 LINK memory_ut 00:01:37.154 CC examples/blob/cli/blobcli.o 00:01:37.154 CC examples/blob/hello_world/hello_blob.o 00:01:37.412 LINK fdp 00:01:37.412 LINK pmr_persistence 00:01:37.412 LINK cmb_copy 00:01:37.412 LINK hello_world 00:01:37.412 LINK hotplug 00:01:37.412 LINK dif 00:01:37.412 LINK reconnect 00:01:37.412 LINK abort 00:01:37.412 LINK hello_blob 
00:01:37.670 LINK arbitration 00:01:37.670 LINK accel_perf 00:01:37.670 LINK nvme_manage 00:01:37.929 LINK blobcli 00:01:37.929 CC test/bdev/bdevio/bdevio.o 00:01:37.929 CC examples/bdev/hello_world/hello_bdev.o 00:01:37.929 CC examples/bdev/bdevperf/bdevperf.o 00:01:38.187 LINK iscsi_fuzz 00:01:38.187 LINK bdevio 00:01:38.187 LINK cuse 00:01:38.187 LINK hello_bdev 00:01:38.752 LINK bdevperf 00:01:39.318 CC examples/nvmf/nvmf/nvmf.o 00:01:39.602 LINK nvmf 00:01:42.133 LINK esnap 00:01:42.133 00:01:42.133 real 0m49.437s 00:01:42.133 user 10m6.332s 00:01:42.133 sys 2m26.596s 00:01:42.133 17:24:37 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:42.133 17:24:37 make -- common/autotest_common.sh@10 -- $ set +x 00:01:42.133 ************************************ 00:01:42.133 END TEST make 00:01:42.133 ************************************ 00:01:42.133 17:24:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:42.133 17:24:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:42.133 17:24:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:42.133 17:24:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:42.133 17:24:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.133 17:24:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:42.133 17:24:37 -- pm/common@44 -- $ pid=2020578 00:01:42.133 17:24:37 -- pm/common@50 -- $ kill -TERM 2020578 00:01:42.133 17:24:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.133 17:24:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:42.133 17:24:37 -- pm/common@44 -- $ pid=2020580 00:01:42.133 17:24:37 -- pm/common@50 -- $ kill -TERM 2020580 00:01:42.133 17:24:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.133 17:24:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:42.133 17:24:37 -- pm/common@44 -- $ pid=2020582 00:01:42.133 17:24:37 -- pm/common@50 -- $ kill -TERM 2020582 00:01:42.133 17:24:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.133 17:24:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:42.133 17:24:37 -- pm/common@44 -- $ pid=2020610 00:01:42.133 17:24:37 -- pm/common@50 -- $ sudo -E kill -TERM 2020610 00:01:42.133 17:24:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:42.133 17:24:37 -- nvmf/common.sh@7 -- # uname -s 00:01:42.133 17:24:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:42.133 17:24:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:42.133 17:24:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:42.133 17:24:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:42.133 17:24:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:42.133 17:24:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:42.134 17:24:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:42.134 17:24:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:42.134 17:24:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:42.134 17:24:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:42.134 17:24:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:42.134 
17:24:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:42.134 17:24:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:42.134 17:24:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:42.134 17:24:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:42.134 17:24:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:42.134 17:24:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:42.134 17:24:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:42.134 17:24:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:42.134 17:24:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:42.134 17:24:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.134 17:24:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.134 17:24:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.134 17:24:37 -- paths/export.sh@5 -- # export PATH 00:01:42.134 17:24:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.134 17:24:37 -- nvmf/common.sh@47 -- # : 0 00:01:42.134 17:24:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:42.134 17:24:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:42.134 17:24:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:42.134 17:24:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:42.134 17:24:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:42.134 17:24:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:42.134 17:24:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:42.134 17:24:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:42.134 17:24:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:42.134 17:24:37 -- spdk/autotest.sh@32 -- # uname -s 00:01:42.134 17:24:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:42.134 17:24:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:42.134 17:24:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:42.134 17:24:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:42.134 17:24:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:42.134 17:24:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:42.134 17:24:37 
-- spdk/autotest.sh@46 -- # type -P udevadm 00:01:42.134 17:24:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:42.134 17:24:37 -- spdk/autotest.sh@48 -- # udevadm_pid=2076668 00:01:42.134 17:24:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:42.134 17:24:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:42.134 17:24:37 -- pm/common@17 -- # local monitor 00:01:42.134 17:24:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.134 17:24:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.134 17:24:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.134 17:24:37 -- pm/common@21 -- # date +%s 00:01:42.134 17:24:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.134 17:24:37 -- pm/common@21 -- # date +%s 00:01:42.134 17:24:37 -- pm/common@25 -- # sleep 1 00:01:42.134 17:24:37 -- pm/common@21 -- # date +%s 00:01:42.134 17:24:37 -- pm/common@21 -- # date +%s 00:01:42.134 17:24:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721057077 00:01:42.134 17:24:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721057077 00:01:42.134 17:24:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721057077 00:01:42.134 17:24:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721057077 00:01:42.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721057077_collect-vmstat.pm.log 00:01:42.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721057077_collect-cpu-load.pm.log 00:01:42.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721057077_collect-cpu-temp.pm.log 00:01:42.134 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721057077_collect-bmc-pm.bmc.pm.log 00:01:43.073 17:24:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:43.073 17:24:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:43.073 17:24:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:43.073 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:01:43.073 17:24:38 -- spdk/autotest.sh@59 -- # create_test_list 00:01:43.073 17:24:38 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:43.073 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:01:43.332 17:24:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:43.332 17:24:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.332 17:24:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.332 17:24:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:43.332 17:24:38 -- spdk/autotest.sh@63 
-- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.332 17:24:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:43.332 17:24:38 -- common/autotest_common.sh@1455 -- # uname 00:01:43.332 17:24:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:43.332 17:24:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:43.332 17:24:38 -- common/autotest_common.sh@1475 -- # uname 00:01:43.332 17:24:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:43.332 17:24:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:43.332 17:24:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:43.332 17:24:38 -- spdk/autotest.sh@72 -- # hash lcov 00:01:43.332 17:24:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:43.332 17:24:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:43.332 --rc lcov_branch_coverage=1 00:01:43.332 --rc lcov_function_coverage=1 00:01:43.332 --rc genhtml_branch_coverage=1 00:01:43.332 --rc genhtml_function_coverage=1 00:01:43.332 --rc genhtml_legend=1 00:01:43.332 --rc geninfo_all_blocks=1 00:01:43.332 ' 00:01:43.332 17:24:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:43.332 --rc lcov_branch_coverage=1 00:01:43.332 --rc lcov_function_coverage=1 00:01:43.332 --rc genhtml_branch_coverage=1 00:01:43.332 --rc genhtml_function_coverage=1 00:01:43.332 --rc genhtml_legend=1 00:01:43.332 --rc geninfo_all_blocks=1 00:01:43.332 ' 00:01:43.332 17:24:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:43.332 --rc lcov_branch_coverage=1 00:01:43.332 --rc lcov_function_coverage=1 00:01:43.332 --rc genhtml_branch_coverage=1 00:01:43.332 --rc genhtml_function_coverage=1 00:01:43.332 --rc genhtml_legend=1 00:01:43.332 --rc geninfo_all_blocks=1 00:01:43.332 --no-external' 00:01:43.332 17:24:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:43.332 --rc lcov_branch_coverage=1 00:01:43.332 --rc lcov_function_coverage=1 00:01:43.332 --rc genhtml_branch_coverage=1 00:01:43.332 --rc genhtml_function_coverage=1 00:01:43.332 --rc genhtml_legend=1 00:01:43.332 --rc geninfo_all_blocks=1 00:01:43.332 --no-external' 00:01:43.332 17:24:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:43.332 lcov: LCOV version 1.14 00:01:43.332 17:24:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:45.272 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:45.272 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:45.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:45.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:45.273 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:45.273 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no 
functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:45.273 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:00.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:00.147 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:18.240 17:25:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:18.240 17:25:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:18.240 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:02:18.240 17:25:11 -- spdk/autotest.sh@91 -- # rm -f 00:02:18.240 17:25:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:18.240 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:18.240 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:18.240 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:18.240 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:18.240 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:18.240 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:18.240 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:18.240 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:18.240 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:18.240 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:18.240 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:18.240 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:18.240 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:18.240 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:18.240 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:18.240 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:18.240 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:18.240 17:25:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:18.240 17:25:13 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:18.240 17:25:13 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:18.240 17:25:13 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:18.240 17:25:13 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:18.240 17:25:13 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:18.240 17:25:13 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:18.240 17:25:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:18.240 17:25:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:18.240 17:25:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:18.240 17:25:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:18.240 17:25:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:18.240 17:25:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:18.240 
17:25:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:18.240 17:25:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:18.240 No valid GPT data, bailing 00:02:18.240 17:25:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:18.240 17:25:13 -- scripts/common.sh@391 -- # pt= 00:02:18.240 17:25:13 -- scripts/common.sh@392 -- # return 1 00:02:18.240 17:25:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:18.240 1+0 records in 00:02:18.240 1+0 records out 00:02:18.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00218627 s, 480 MB/s 00:02:18.240 17:25:13 -- spdk/autotest.sh@118 -- # sync 00:02:18.240 17:25:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:18.240 17:25:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:18.240 17:25:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:20.140 17:25:14 -- spdk/autotest.sh@124 -- # uname -s 00:02:20.140 17:25:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:20.140 17:25:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:20.140 17:25:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:20.140 17:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:20.140 17:25:14 -- common/autotest_common.sh@10 -- # set +x 00:02:20.140 ************************************ 00:02:20.140 START TEST setup.sh 00:02:20.140 ************************************ 00:02:20.140 17:25:15 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:20.140 * Looking for test storage... 00:02:20.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:20.140 17:25:15 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:20.140 17:25:15 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:20.140 17:25:15 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:20.140 17:25:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:20.140 17:25:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:20.140 17:25:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:20.140 ************************************ 00:02:20.140 START TEST acl 00:02:20.140 ************************************ 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:20.140 * Looking for test storage... 
00:02:20.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:20.140 17:25:15 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:20.140 17:25:15 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:20.140 17:25:15 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:20.140 17:25:15 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:20.140 17:25:15 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:20.140 17:25:15 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:20.140 17:25:15 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:20.140 17:25:15 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:20.140 17:25:15 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.510 17:25:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:21.510 17:25:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:21.510 17:25:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.510 17:25:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:21.510 17:25:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:21.510 17:25:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:22.884 Hugepages 00:02:22.884 node hugesize free / total 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 00:02:22.884 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:22.884 17:25:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:22.884 17:25:17 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:22.884 17:25:17 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:22.884 17:25:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:22.884 ************************************ 00:02:22.884 START TEST denied 00:02:22.884 ************************************ 00:02:22.884 17:25:17 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:22.884 17:25:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:22.884 17:25:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:22.884 17:25:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:22.884 17:25:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:22.884 17:25:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:24.259 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:24.259 17:25:19 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:24.259 17:25:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:26.794 00:02:26.794 real 0m3.703s 00:02:26.794 user 0m1.048s 00:02:26.794 sys 0m1.751s 00:02:26.794 17:25:21 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:26.794 17:25:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:26.794 ************************************ 00:02:26.794 END TEST denied 00:02:26.794 ************************************ 00:02:26.794 17:25:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:26.794 17:25:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:26.794 17:25:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:26.794 17:25:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.794 17:25:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:26.794 ************************************ 00:02:26.794 START TEST allowed 00:02:26.794 ************************************ 00:02:26.794 17:25:21 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:26.794 17:25:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:26.794 17:25:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:26.794 17:25:21 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:26.794 17:25:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:26.794 17:25:21 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:28.697 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:28.697 17:25:23 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:28.697 17:25:23 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:28.697 17:25:23 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:28.697 17:25:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.697 17:25:23 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.600 00:02:30.600 real 0m3.799s 00:02:30.600 user 0m0.976s 00:02:30.600 sys 0m1.650s 00:02:30.600 17:25:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:30.600 17:25:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:30.600 ************************************ 00:02:30.600 END TEST allowed 00:02:30.600 ************************************ 00:02:30.600 17:25:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:30.600 00:02:30.600 real 0m10.232s 00:02:30.600 user 0m3.072s 00:02:30.600 sys 0m5.150s 00:02:30.600 17:25:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:30.600 17:25:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:30.600 ************************************ 00:02:30.600 END TEST acl 00:02:30.600 ************************************ 00:02:30.600 17:25:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:30.600 17:25:25 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:30.600 17:25:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:30.600 17:25:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:30.600 17:25:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:30.600 ************************************ 00:02:30.600 START TEST hugepages 00:02:30.600 ************************************ 00:02:30.600 17:25:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:30.600 * Looking for test storage... 00:02:30.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43743832 kB' 'MemAvailable: 47245344 kB' 'Buffers: 2704 kB' 'Cached: 10227232 kB' 'SwapCached: 0 kB' 'Active: 7226364 kB' 'Inactive: 3506596 kB' 'Active(anon): 6831772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506432 kB' 'Mapped: 209780 kB' 'Shmem: 6328748 kB' 'KReclaimable: 188324 kB' 'Slab: 556592 kB' 'SReclaimable: 188324 kB' 'SUnreclaim: 368268 kB' 'KernelStack: 12864 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 7944016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.600 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.601 17:25:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.601 
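
The wall of '[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' / 'continue' pairs above is setup/common.sh's get_meminfo helper walking the captured meminfo snapshot one 'key: value' pair at a time until it reaches the requested key (Hugepagesize), then echoing its value, 2048. hugepages.sh records that as default_hugepages and points default_huge_nr and global_huge_nr at the sysfs and procfs knobs it writes later. A minimal sketch of that lookup loop, reconstructed from the trace rather than copied from the real test/setup/common.sh:

    # sketch of the key scan traced above; the function name matches the log,
    # the body is an approximation that reads /proc/meminfo directly
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # e.g. var=Hugepagesize val=2048 _=kB
            [[ $var == "$get" ]] || continue    # every non-matching key logs a 'continue'
            echo "$val"                         # the 'echo 2048' seen above
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize    # -> 2048 on this runner
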
17:25:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:30.601 17:25:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:30.601 17:25:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:30.601 17:25:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:30.601 17:25:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:30.601 ************************************ 00:02:30.601 START TEST default_setup 00:02:30.601 ************************************ 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.601 17:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:31.532 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:31.532 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:31.532 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:31.532 
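
Before the test starts, clear_hp walks every /sys/devices/system/node/node*/hugepages/hugepages-*/ directory and writes 0 (the repeated 'echo 0' lines above), so both NUMA node pools start empty, and CLEAR_HUGE=yes asks the teardown to do the same afterwards. get_test_nr_hugepages 2097152 0 then converts the 2 GiB request into pages of the default size: 2097152 kB / 2048 kB = 1024 pages, all assigned to node 0 (nr_hugepages=1024, nodes_test[0]=1024 in the trace). A hedged sketch of that arithmetic and of the zeroing the 'echo 0' lines imply; the paths come from the trace, the sudo tee form is an assumption about how the write is performed:

    size_kb=2097152 default_hugepages=2048
    nr_hugepages=$(( size_kb / default_hugepages ))   # 1024, as in hugepages.sh@57 above
    nodes_test[0]=$nr_hugepages                       # node id '0' was passed on the call

    # clear_hp: reset every per-node pool before the test runs
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 | sudo tee "$hp" >/dev/null
    done
    export CLEAR_HUGE=yes

The 'ioatdma -> vfio-pci' lines that begin here and continue below are /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh (invoked via 'setup output' just above) detaching the I/OAT DMA channels, and then the NVMe drive, from their kernel drivers and binding them to vfio-pci so SPDK's userspace drivers can claim them.
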
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:31.790 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:31.790 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:31.790 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:31.790 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:31.790 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:32.754 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45860680 kB' 'MemAvailable: 49362176 kB' 'Buffers: 2704 kB' 'Cached: 10227328 kB' 'SwapCached: 0 kB' 'Active: 7245688 kB' 'Inactive: 3506596 kB' 'Active(anon): 6851096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525604 kB' 'Mapped: 209916 kB' 'Shmem: 6328844 kB' 'KReclaimable: 188292 kB' 'Slab: 556588 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368296 kB' 
'KernelStack: 12816 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7967896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 
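
With the devices handed over to vfio-pci, verify_nr_hugepages first checks what looks like /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never', so THP is not disabled outright) and then calls get_meminfo AnonHugePages. The long quoted block above is the snapshot that call captures before scanning it: since no node argument is given (node= is empty), it reads /proc/meminfo as a whole; with a node it would read /sys/devices/system/node/node<N>/meminfo and strip the 'Node <N> ' prefix from every entry. The snapshot already reports HugePages_Total: 1024, HugePages_Free: 1024 and Hugetlb: 2097152 kB, i.e. the 1024-page request from default_setup took effect. A rough sketch of the snapshot capture, with the extglob prefix-strip lifted straight from the trace:

    shopt -s extglob
    node=                                   # empty in this call, so the per-node path is skipped
    mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"               # the 'mapfile -t mem' line above
    mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix each line with 'Node <N> '
    printf '%s\n' "${mem[@]}"               # produces the quoted block seen in the trace
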
17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.754 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.755 17:25:27 
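
AnonHugePages came back as 0 (anon=0), so no transparent huge pages need to be discounted, and the same scan now repeats for HugePages_Surp and, further down, HugePages_Rsvd. Those counters are presumably what verify_nr_hugepages balances against HugePages_Total and HugePages_Free to confirm the pool really holds the 1024 pages that were requested; a small check in the same spirit, reusing the get_meminfo sketch above and treating the exact accounting as an assumption:

    anon=$(get_meminfo AnonHugePages)     # 0 here: nothing grabbed by THP
    surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)    # reserved but not yet faulted-in pages
    total=$(get_meminfo HugePages_Total)
    free=$(get_meminfo HugePages_Free)
    if (( total - surp == 1024 )); then
        echo "pool OK: total=$total free=$free resv=$resv anon=$anon"
    else
        echo "pool mismatch: total=$total surp=$surp"
    fi
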
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45860060 kB' 'MemAvailable: 49361556 kB' 'Buffers: 2704 kB' 'Cached: 10227332 kB' 'SwapCached: 0 kB' 'Active: 7245364 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525164 kB' 'Mapped: 209884 kB' 'Shmem: 6328848 kB' 'KReclaimable: 188292 kB' 'Slab: 556576 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368284 kB' 'KernelStack: 12736 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.755 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
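
The counters this scan is pulling out of the snapshot are also exposed directly by the kernel, per page size under /sys/kernel/mm/hugepages/hugepages-2048kB/ and per NUMA node under the /sys/devices/system/node/node*/hugepages/ directories that clear_hp wrote to earlier. A quick way to watch them outside the test harness (standard hugetlb sysfs file names, nothing SPDK-specific):

    for f in /sys/kernel/mm/hugepages/hugepages-2048kB/{nr,free,resv,surplus}_hugepages; do
        printf '%-20s %s\n' "${f##*/}:" "$(cat "$f")"
    done
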
00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45860060 kB' 'MemAvailable: 49361556 kB' 'Buffers: 2704 kB' 'Cached: 10227336 kB' 'SwapCached: 0 kB' 'Active: 7245236 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525072 kB' 'Mapped: 209884 kB' 'Shmem: 6328852 kB' 'KReclaimable: 188292 kB' 'Slab: 556576 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368284 kB' 'KernelStack: 12752 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.756 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 
17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.757 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:32.758 nr_hugepages=1024 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:32.758 resv_hugepages=0 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:32.758 surplus_hugepages=0 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:32.758 anon_hugepages=0 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45860172 
kB' 'MemAvailable: 49361668 kB' 'Buffers: 2704 kB' 'Cached: 10227372 kB' 'SwapCached: 0 kB' 'Active: 7244972 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850380 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524724 kB' 'Mapped: 209808 kB' 'Shmem: 6328888 kB' 'KReclaimable: 188292 kB' 'Slab: 556544 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368252 kB' 'KernelStack: 12800 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:32.759 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:33.018 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:33.019 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.019 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.019 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21276788 kB' 'MemUsed: 11600152 kB' 'SwapCached: 0 kB' 'Active: 5073804 kB' 'Inactive: 3264144 kB' 'Active(anon): 4885232 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8034076 kB' 'Mapped: 73908 kB' 'AnonPages: 307040 kB' 'Shmem: 4581360 kB' 'KernelStack: 7640 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114372 kB' 'Slab: 313456 kB' 'SReclaimable: 114372 kB' 'SUnreclaim: 199084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[get_meminfo scan: setup/common.sh@31-32 walks the node0 snapshot above field by field, MemTotal through Unaccepted; every field misses HugePages_Surp and hits continue until the match below]
00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:33.020 node0=1024 expecting 1024 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:33.020 00:02:33.020 real 0m2.431s 00:02:33.020 user 0m0.688s 00:02:33.020 sys 0m0.861s 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:33.020 17:25:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:33.020 ************************************ 00:02:33.020 END TEST default_setup 00:02:33.020 ************************************ 00:02:33.020 17:25:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:33.020 17:25:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:33.020 17:25:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:33.020 17:25:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:33.020 17:25:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:33.020 ************************************ 00:02:33.020 START TEST per_node_1G_alloc 00:02:33.020 ************************************ 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:33.020 17:25:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.020 17:25:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:33.955 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:33.955 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:33.955 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:33.955 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:33.955 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:33.955 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:33.955 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:33.955 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:33.955 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:33.955 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:33.955 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:33.955 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:33.955 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:33.955 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:33.955 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:33.955 
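The per_node_1G_alloc trace above sizes its request before scripts/setup.sh runs: get_test_nr_hugepages is called with 1048576 kB and the node list 0 1, the size is divided by the default 2048 kB hugepage, and each listed node is assigned the resulting 512 pages (nodes_test[0]=512, nodes_test[1]=512), giving NRHUGE=512 HUGENODE=0,1. A minimal stand-alone sketch of that arithmetic; the function name and layout here are illustrative, not the actual setup/hugepages.sh code:

#!/usr/bin/env bash
# Illustrative sketch mirroring the arithmetic seen in the trace above;
# not the real get_test_nr_hugepages / get_test_nr_hugepages_per_node.
calc_per_node_hugepages() {
    local size_kb=$1; shift                     # requested size, e.g. 1048576 kB
    local default_kb
    default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    local pages=$(( size_kb / default_kb ))     # 1048576 / 2048 = 512
    local node
    for node in "$@"; do                        # every node named in HUGENODE
        echo "node${node}=${pages}"
    done
}

calc_per_node_hugepages 1048576 0 1             # -> node0=512, node1=512
# The test then re-runs setup with: NRHUGE=512 HUGENODE=0,1 scripts/setup.sh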
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:33.955 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.222 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45902756 kB' 'MemAvailable: 49404252 kB' 'Buffers: 2704 kB' 'Cached: 10227436 kB' 'SwapCached: 0 kB' 'Active: 7244756 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850164 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524504 kB' 'Mapped: 209844 kB' 'Shmem: 6328952 kB' 'KReclaimable: 188292 kB' 'Slab: 556808 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368516 kB' 'KernelStack: 12784 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB'
[get_meminfo scan: setup/common.sh@31-32 walks the /proc/meminfo snapshot above field by field, MemTotal through WritebackTmp; every field misses AnonHugePages and hits continue until the match below]
00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45902000 kB' 'MemAvailable: 49403496 kB' 'Buffers: 2704 kB' 'Cached: 10227440 kB' 'SwapCached: 0 kB' 'Active: 7245116 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850524 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524832 kB' 'Mapped: 209820 kB' 'Shmem: 6328956 kB' 'KReclaimable: 188292 kB' 'Slab: 556792 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368500 kB' 'KernelStack: 12800 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
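The long runs of '[[ <field> == ... ]]' / 'continue' entries in this block are get_meminfo at work: setup/common.sh snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given), then walks the snapshot with IFS=': ' and read -r var val _, skipping every field until the requested one turns up and echoing its value. A minimal sketch of that lookup pattern, assuming a helper name of my own and leaving out the per-node file handling and 'Node N ' prefix stripping the real script does:

#!/usr/bin/env bash
# Illustrative sketch of the lookup pattern traced here; not the actual
# setup/common.sh get_meminfo, which also reads per-node meminfo files.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # every non-matching field hits continue
        echo "$val"                         # matching field: print its value
        return 0
    done < /proc/meminfo
    echo 0                                  # assumption: default to 0 if the field is absent
}

get_meminfo_field HugePages_Surp            # prints 0 on this box, matching surp=0 below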
00:02:34.223 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[get_meminfo scan: setup/common.sh@31-32 keeps walking the snapshot, Buffers through FileHugePages; every field misses HugePages_Surp and hits continue until the match below]
00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45902084 kB' 'MemAvailable: 49403580 kB' 'Buffers: 2704 kB' 'Cached: 10227456 kB' 'SwapCached: 0 kB' 'Active: 7245064 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850472 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524836 kB' 'Mapped: 209820 kB' 'Shmem: 6328972 kB' 'KReclaimable: 188292 kB' 'Slab: 556872 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368580 kB' 'KernelStack: 12816 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 
17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
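(The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field, skipping every line that is not the requested key, here HugePages_Rsvd. The following is a minimal sketch of that pattern, reconstructed from the trace rather than copied from the actual SPDK script, so names and details are approximations.)

#!/usr/bin/env bash
# Sketch of the field-scan pattern visible in the xtrace (not the real setup/common.sh).
shopt -s extglob

get_meminfo() {
        local get=$1            # field to report, e.g. HugePages_Rsvd
        local node=${2:-}       # optional NUMA node number
        local var val _
        local mem_f=/proc/meminfo
        # With a node argument the per-node meminfo file is read instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
                mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                # Non-matching fields are skipped ("continue" in the trace);
                # the matching field's value is echoed and the scan stops.
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
}

get_meminfo HugePages_Rsvd      # prints 0 on the machine in this log
get_meminfo HugePages_Free 0    # node 0's free 2M pages (512 in this run)

(End of sketch; the log continues with the remaining fields of the HugePages_Rsvd scan.)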
00:02:34.225 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.226 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:34.227 nr_hugepages=1024 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:34.227 
resv_hugepages=0 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:34.227 surplus_hugepages=0 00:02:34.227 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:34.228 anon_hugepages=0 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45902484 kB' 'MemAvailable: 49403980 kB' 'Buffers: 2704 kB' 'Cached: 10227480 kB' 'SwapCached: 0 kB' 'Active: 7245148 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850556 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524836 kB' 'Mapped: 209820 kB' 'Shmem: 6328996 kB' 'KReclaimable: 188292 kB' 'Slab: 556872 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368580 kB' 'KernelStack: 12816 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 
17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.228 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.229 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:34.230 17:25:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22337112 kB' 'MemUsed: 10539828 kB' 'SwapCached: 0 kB' 'Active: 5073640 kB' 'Inactive: 3264144 kB' 'Active(anon): 4885068 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8034076 kB' 'Mapped: 73920 kB' 'AnonPages: 306848 kB' 'Shmem: 4581360 kB' 'KernelStack: 7640 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114372 kB' 'Slab: 313660 kB' 'SReclaimable: 114372 kB' 'SUnreclaim: 199288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.230 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.231 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:34.232 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23565628 kB' 'MemUsed: 4099124 kB' 'SwapCached: 0 kB' 'Active: 2171588 kB' 'Inactive: 242452 kB' 'Active(anon): 1965568 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2196136 kB' 'Mapped: 135900 kB' 'AnonPages: 218028 kB' 'Shmem: 1747664 kB' 'KernelStack: 5192 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73920 kB' 'Slab: 243212 kB' 'SReclaimable: 73920 kB' 'SUnreclaim: 169292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
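The xtrace above is the setup/common.sh get_meminfo helper walking a per-node meminfo file key by key: it picks /sys/devices/system/node/node1/meminfo when a node is given, strips the "Node N " prefix from every line, then splits each line on ': ' and echoes the value whose key matches (here HugePages_Surp). The following is a minimal sketch of that lookup, reconstructed from the traced commands only; the function name, the prefix-strip pattern, and the IFS handling appear in the trace, but this is not the SPDK source verbatim and details (e.g. reading the file directly instead of via printf) are simplified assumptions.

#!/usr/bin/env bash
# Minimal sketch of the traced get_meminfo helper (assumption: simplified,
# not the exact setup/common.sh source).
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1          # key to look up, e.g. HugePages_Surp
    local node=${2:-}     # optional NUMA node number
    local var val
    local mem_f mem

    # Default to the system-wide file; switch to the per-node file if it exists.
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Read the whole file, then strip the "Node N " prefix that the
    # per-node meminfo carries on every line.
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan line by line: split on ': ', print the value of the matching key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example, mirroring the query traced above: surplus hugepages on node 1.
get_meminfo HugePages_Surp 1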
00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.233 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.234 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:34.234 node0=512 expecting 512 00:02:34.235 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.235 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.235 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.235 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:34.235 node1=512 expecting 512 00:02:34.235 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:34.235 00:02:34.235 real 0m1.342s 00:02:34.235 user 0m0.570s 00:02:34.235 sys 0m0.731s 00:02:34.235 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:34.235 17:25:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:34.235 ************************************ 00:02:34.235 END TEST per_node_1G_alloc 00:02:34.235 ************************************ 00:02:34.235 17:25:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:34.235 17:25:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:34.235 17:25:29 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.235 17:25:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.235 17:25:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:34.493 ************************************ 00:02:34.493 START TEST even_2G_alloc 00:02:34.493 ************************************ 00:02:34.493 17:25:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:34.493 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:34.493 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.494 17:25:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:35.431 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.431 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:02:35.431 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.431 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.431 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.431 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.431 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.431 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.431 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.431 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:35.431 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:35.431 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:35.431 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:35.431 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:35.431 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:35.431 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:35.431 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45891280 kB' 'MemAvailable: 49392776 kB' 'Buffers: 2704 kB' 'Cached: 10227576 kB' 'SwapCached: 0 kB' 'Active: 7245420 kB' 'Inactive: 3506596 kB' 'Active(anon): 6850828 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524928 kB' 'Mapped: 209968 kB' 'Shmem: 6329092 kB' 'KReclaimable: 188292 kB' 'Slab: 556820 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368528 kB' 'KernelStack: 12784 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 
17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.431 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.432 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45891028 kB' 'MemAvailable: 49392524 kB' 'Buffers: 2704 kB' 'Cached: 10227580 kB' 'SwapCached: 0 kB' 'Active: 7245720 kB' 'Inactive: 3506596 kB' 'Active(anon): 6851128 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525260 kB' 'Mapped: 209936 kB' 'Shmem: 6329096 kB' 'KReclaimable: 188292 kB' 'Slab: 556804 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368512 kB' 'KernelStack: 12832 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.696 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.697 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45891436 kB' 'MemAvailable: 49392932 kB' 'Buffers: 2704 kB' 'Cached: 10227596 kB' 'SwapCached: 0 kB' 'Active: 7245664 kB' 'Inactive: 3506596 kB' 'Active(anon): 6851072 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525100 kB' 'Mapped: 209824 kB' 'Shmem: 6329112 kB' 'KReclaimable: 188292 kB' 'Slab: 556800 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368508 kB' 'KernelStack: 12848 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.698 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.698 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:35.699 nr_hugepages=1024 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:35.699 resv_hugepages=0 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:35.699 surplus_hugepages=0 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:35.699 anon_hugepages=0 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45893116 kB' 'MemAvailable: 49394612 kB' 'Buffers: 2704 kB' 'Cached: 10227620 kB' 'SwapCached: 0 kB' 'Active: 7245688 kB' 'Inactive: 3506596 kB' 'Active(anon): 6851096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525096 kB' 'Mapped: 209824 kB' 'Shmem: 6329136 kB' 'KReclaimable: 188292 kB' 'Slab: 556772 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368480 kB' 'KernelStack: 12848 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7965596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.699 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 
17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.700 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 
17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22339888 kB' 'MemUsed: 10537052 kB' 'SwapCached: 0 kB' 'Active: 5073796 kB' 'Inactive: 3264144 kB' 'Active(anon): 4885224 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8034080 kB' 'Mapped: 73924 kB' 'AnonPages: 306960 kB' 'Shmem: 4581364 kB' 'KernelStack: 7656 kB' 'PageTables: 
4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114372 kB' 'Slab: 313496 kB' 'SReclaimable: 114372 kB' 'SUnreclaim: 199124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.701 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:35.702 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23553876 kB' 'MemUsed: 4110876 kB' 'SwapCached: 0 kB' 'Active: 2171892 kB' 'Inactive: 242452 kB' 'Active(anon): 1965872 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2196284 kB' 'Mapped: 135900 kB' 'AnonPages: 218144 kB' 'Shmem: 1747812 kB' 'KernelStack: 5192 kB' 'PageTables: 
3556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73920 kB' 'Slab: 243276 kB' 'SReclaimable: 73920 kB' 'SUnreclaim: 169356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.703 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:35.704 node0=512 expecting 512 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:35.704 node1=512 expecting 512 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:35.704 00:02:35.704 real 0m1.340s 00:02:35.704 user 0m0.554s 00:02:35.704 sys 0m0.741s 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:35.704 17:25:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:35.704 ************************************ 00:02:35.704 END TEST even_2G_alloc 00:02:35.704 ************************************ 00:02:35.704 17:25:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:35.704 17:25:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:35.704 17:25:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:35.704 17:25:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:35.704 17:25:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:35.704 
************************************ 00:02:35.704 START TEST odd_alloc 00:02:35.704 ************************************ 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.704 17:25:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:36.639 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:36.639 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:36.639 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:36.639 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:36.639 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:36.639 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:36.639 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:02:36.639 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:36.639 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:36.639 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:36.639 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:36.639 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:36.639 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:36.639 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:36.902 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:36.902 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:36.902 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.902 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45882856 kB' 'MemAvailable: 49384352 kB' 'Buffers: 2704 kB' 'Cached: 10227712 kB' 'SwapCached: 0 kB' 'Active: 7242564 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847972 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522036 kB' 'Mapped: 209000 kB' 'Shmem: 6329228 kB' 'KReclaimable: 188292 kB' 'Slab: 556564 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368272 kB' 'KernelStack: 12800 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 7950484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.903 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.904 
17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45883500 kB' 'MemAvailable: 49384996 kB' 'Buffers: 2704 kB' 'Cached: 10227716 kB' 'SwapCached: 0 kB' 'Active: 7242232 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847640 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521728 kB' 'Mapped: 208984 kB' 'Shmem: 6329232 kB' 'KReclaimable: 188292 kB' 'Slab: 556564 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368272 kB' 'KernelStack: 12800 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7950504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.904 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.905 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45887976 kB' 'MemAvailable: 49389472 kB' 'Buffers: 2704 kB' 'Cached: 10227732 kB' 'SwapCached: 0 kB' 'Active: 7242116 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847524 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521564 kB' 'Mapped: 208904 kB' 'Shmem: 6329248 kB' 'KReclaimable: 188292 kB' 'Slab: 556564 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368272 kB' 'KernelStack: 12800 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7950524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.906 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:36.907 nr_hugepages=1025 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:36.907 resv_hugepages=0 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:36.907 surplus_hugepages=0 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:36.907 anon_hugepages=0 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.907 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45887976 kB' 'MemAvailable: 49389472 kB' 'Buffers: 2704 kB' 'Cached: 10227732 kB' 'SwapCached: 0 kB' 'Active: 7241796 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847204 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521244 kB' 'Mapped: 208904 kB' 'Shmem: 6329248 kB' 'KReclaimable: 188292 kB' 'Slab: 556564 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368272 kB' 'KernelStack: 12784 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7950544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 
17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.908 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:36.909 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22338208 kB' 'MemUsed: 10538732 kB' 'SwapCached: 0 kB' 'Active: 5072164 kB' 'Inactive: 3264144 kB' 'Active(anon): 4883592 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8034112 kB' 'Mapped: 73224 kB' 'AnonPages: 305352 kB' 'Shmem: 4581396 kB' 'KernelStack: 7656 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114372 kB' 'Slab: 313400 kB' 'SReclaimable: 114372 kB' 'SUnreclaim: 199028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.170 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23549656 kB' 'MemUsed: 4115096 kB' 'SwapCached: 0 kB' 'Active: 2169816 kB' 'Inactive: 242452 kB' 'Active(anon): 1963796 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2196328 kB' 'Mapped: 135680 kB' 'AnonPages: 216072 kB' 'Shmem: 1747856 kB' 'KernelStack: 5144 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73920 kB' 'Slab: 243164 kB' 'SReclaimable: 73920 kB' 'SUnreclaim: 169244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.171 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:37.172 node0=512 expecting 513 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:37.172 node1=513 expecting 512 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:37.172 00:02:37.172 real 0m1.334s 00:02:37.172 user 0m0.604s 00:02:37.172 sys 0m0.689s 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:37.172 17:25:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:37.172 ************************************ 00:02:37.172 END TEST odd_alloc 00:02:37.172 ************************************ 00:02:37.172 17:25:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:37.172 17:25:32 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:37.172 17:25:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:37.172 17:25:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.172 17:25:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:37.172 ************************************ 00:02:37.172 START TEST custom_alloc 00:02:37.172 ************************************ 00:02:37.172 17:25:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:37.172 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:37.172 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:37.172 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.173 17:25:32 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
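The trace above shows hugepages.sh computing per-node requests for the custom_alloc case (nodes_hp[0]=512, nodes_hp[1]=1024) before the HUGENODE list is assembled and setup.sh is re-run. A minimal, self-contained bash sketch of that per-node bookkeeping follows; the array values, the 2MB page size, and the sysfs readback loop are assumptions taken from this particular run, not a copy of the SPDK scripts themselves.

#!/usr/bin/env bash
# Sketch: accumulate per-node hugepage requests and build a HUGENODE
# string of the form seen later in this trace ("nodes_hp[0]=512,nodes_hp[1]=1024").
nodes_hp=(512 1024)            # assumed per-node request, matching this run
HUGENODE=()
total=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( total += nodes_hp[node] ))
done
hugenode_str=$(IFS=,; echo "${HUGENODE[*]}")
echo "HUGENODE=$hugenode_str (requesting $total x 2MB pages)"

# The verification pass earlier in the log amounts to pulling one key out of
# /proc/meminfo or a per-node meminfo file; a condensed equivalent:
for node in /sys/devices/system/node/node[0-9]*; do
    printf '%s HugePages_Total: %s\n' "${node##*/}" \
        "$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")"
done

Once setup.sh runs with HUGENODE exported, the same meminfo scan is repeated per node to confirm each node received its requested count, which is what the surrounding [[ key == HugePages_Total ]] / continue iterations in this trace are doing one key at a time.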
00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.173 17:25:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:38.106 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.106 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:38.106 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.106 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.106 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.106 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.106 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.106 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.106 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.106 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.106 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.369 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:02:38.369 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.369 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.369 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.369 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.369 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44810436 kB' 'MemAvailable: 48311932 kB' 'Buffers: 2704 kB' 'Cached: 10227844 kB' 'SwapCached: 0 kB' 'Active: 7242644 kB' 'Inactive: 3506596 kB' 'Active(anon): 6848052 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521896 kB' 'Mapped: 209048 kB' 'Shmem: 6329360 kB' 'KReclaimable: 188292 kB' 'Slab: 556420 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368128 kB' 'KernelStack: 12816 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7950744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.369 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.370 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44812900 kB' 'MemAvailable: 48314396 kB' 'Buffers: 2704 kB' 'Cached: 10227848 kB' 'SwapCached: 0 kB' 'Active: 7242260 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521492 kB' 'Mapped: 208912 kB' 'Shmem: 6329364 kB' 'KReclaimable: 188292 kB' 'Slab: 556376 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368084 kB' 'KernelStack: 12816 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7950764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.370 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
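The long run of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' entries above is the get_meminfo helper stepping through /proc/meminfo one field at a time (IFS=': ', read -r var val _) until it reaches the requested key, first AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, and echoing that field's value. A condensed sketch of that pattern, under the assumption that it matches the loop shape visible in the trace rather than the literal setup/common.sh source:

    # assumption: simplified; the real helper also supports a per-node
    # /sys/devices/system/node/node<N>/meminfo path and reads into an array first
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the field we want, keep scanning
            echo "$val"                        # numeric value; any trailing 'kB' lands in "_"
            return 0
        done < /proc/meminfo
        return 1                               # key not found (sketch-only fallback)
    }
    # e.g. surp=$(get_meminfo HugePages_Surp)  # -> 0 on this node, as the trace shows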
00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44812900 kB' 'MemAvailable: 48314396 kB' 'Buffers: 2704 kB' 'Cached: 10227864 kB' 'SwapCached: 0 kB' 'Active: 7242400 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847808 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521640 kB' 'Mapped: 208912 kB' 'Shmem: 6329380 kB' 'KReclaimable: 188292 kB' 'Slab: 556376 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368084 kB' 'KernelStack: 12848 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7950784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.371 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.372 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:38.373 nr_hugepages=1536 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:38.373 resv_hugepages=0 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:38.373 surplus_hugepages=0 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:38.373 anon_hugepages=0 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.373 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44813332 kB' 'MemAvailable: 48314828 kB' 'Buffers: 2704 kB' 'Cached: 10227888 kB' 'SwapCached: 0 kB' 'Active: 7242388 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847796 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521644 kB' 'Mapped: 208912 kB' 'Shmem: 6329404 kB' 'KReclaimable: 188292 kB' 'Slab: 556376 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368084 kB' 'KernelStack: 12848 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7950804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.374 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.655 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.656 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.657 
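[note] The xtrace above is the expansion of the meminfo lookup helper that the `setup/common.sh` markers point at: it picks `/proc/meminfo` (or a node's own meminfo file when a node argument is given), reads it field by field with IFS=': ', and emits one "continue" line per non-matching key until the requested key (HugePages_Rsvd, then HugePages_Total) is found and its value echoed. A minimal sketch of that pattern, assuming bash 4+ and GNU sed; the function name get_meminfo_sketch and the sed-based "Node N " prefix stripping are illustrative stand-ins, not the project's actual implementation:

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <N> "; strip it so the key
        # names line up with /proc/meminfo, then scan for the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each miss is one "continue" record in the trace
            echo "$val"                        # e.g. 1536 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }

On a box laid out like the one in this log, `get_meminfo_sketch HugePages_Total` would print 1536 and `get_meminfo_sketch HugePages_Total 0` would print 512, matching the values echoed by the trace.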
17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22322812 kB' 'MemUsed: 10554128 kB' 'SwapCached: 0 kB' 'Active: 5072192 kB' 'Inactive: 3264144 kB' 'Active(anon): 4883620 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8034208 kB' 'Mapped: 73232 kB' 'AnonPages: 305228 kB' 'Shmem: 4581492 kB' 'KernelStack: 7688 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114372 kB' 'Slab: 313316 kB' 'SReclaimable: 114372 kB' 'SUnreclaim: 198944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.657 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22490920 kB' 'MemUsed: 5173832 kB' 'SwapCached: 0 kB' 'Active: 2170576 kB' 'Inactive: 242452 kB' 'Active(anon): 1964556 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2196424 kB' 'Mapped: 135680 kB' 'AnonPages: 216728 kB' 'Shmem: 1747952 kB' 'KernelStack: 5160 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73920 kB' 'Slab: 243060 kB' 'SReclaimable: 73920 kB' 'SUnreclaim: 169140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.658 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 
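(The backslash-heavy comparisons above are just bash xtrace rendering: when get_meminfo compares each field name against the requested key inside [[ ]], the traced right-hand side is printed with every character escaped so it reads as a literal string match rather than a glob. A quick way to reproduce the effect in any bash shell:

  set -x
  key=MemTotal want=HugePages_Surp
  [[ $key == "$want" ]] || echo miss
  # trace: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]

Each such line is therefore one /proc/meminfo field being skipped until HugePages_Surp is reached.)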
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.659 17:25:33 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:38.659 node0=512 expecting 512 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:38.659 node1=1024 expecting 1024 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:38.659 00:02:38.659 real 0m1.450s 00:02:38.659 user 0m0.645s 00:02:38.659 sys 0m0.768s 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:38.659 17:25:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:38.659 ************************************ 00:02:38.659 END TEST custom_alloc 00:02:38.659 ************************************ 00:02:38.659 17:25:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:38.659 17:25:33 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:38.659 17:25:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.659 17:25:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.659 17:25:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:38.659 ************************************ 00:02:38.659 START TEST no_shrink_alloc 00:02:38.659 ************************************ 00:02:38.659 17:25:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:38.659 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:38.659 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:38.659 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:38.659 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:38.659 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:38.659 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:38.660 17:25:33 
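(custom_alloc ends with the expected 512/1024 split across the two nodes, and no_shrink_alloc immediately asks get_test_nr_hugepages for 2097152 kB backed entirely by node 0. The size argument appears to be in kB, which lines up with the 'Hugetlb: 2097152 kB' totals reported further down, so with 2048 kB hugepages the request comes out to 1024 pages. A minimal sketch of that arithmetic, separate from the real bookkeeping in setup/hugepages.sh:

  size_kb=2097152                                                  # requested size in kB
  hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)    # 2048 on this box
  echo "$(( size_kb / hp_kb )) pages, all on node 0"               # -> 1024 pages, all on node 0
)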
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.660 17:25:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:39.597 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:39.597 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:39.597 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:39.597 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:39.597 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:39.597 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:39.597 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:39.597 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:39.597 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:39.597 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:39.597 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:39.597 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:39.597 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:39.597 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:39.597 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:39.597 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:39.597 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45844200 kB' 'MemAvailable: 49345696 kB' 'Buffers: 2704 kB' 'Cached: 10227968 kB' 'SwapCached: 0 kB' 'Active: 7242432 kB' 'Inactive: 3506596 kB' 'Active(anon): 6847840 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521540 kB' 'Mapped: 209052 kB' 'Shmem: 6329484 kB' 'KReclaimable: 188292 kB' 'Slab: 556296 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368004 kB' 'KernelStack: 12816 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7950868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 
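(get_meminfo in setup/common.sh slurps the whole file with mapfile, strips any 'Node <id>' prefix used by the per-node copies, and then walks the fields with IFS=': ' until the requested key matches; that walk is what fills the next several hundred trace lines. For a one-off lookup outside the harness, the same system-wide value can be read with a single awk call (sketch only; a per-node lookup would read /sys/devices/system/node/node<id>/meminfo instead):

  get=AnonHugePages
  awk -v k="$get:" '$1 == k {print $2}' /proc/meminfo   # prints 0 here, the same value echoed at the end of the scan
)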
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.862 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 
17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.863 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45844512 kB' 'MemAvailable: 49346008 kB' 'Buffers: 2704 kB' 'Cached: 10227968 kB' 'SwapCached: 0 kB' 'Active: 7243040 kB' 'Inactive: 3506596 kB' 'Active(anon): 6848448 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522208 kB' 'Mapped: 209012 kB' 'Shmem: 6329484 kB' 'KReclaimable: 188292 kB' 'Slab: 556296 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368004 kB' 'KernelStack: 12880 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
kB' 'Committed_AS: 7950884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 
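(This is the second full pass over /proc/meminfo, this time hunting for HugePages_Surp. For a quick manual check outside the test, all of the hugepage counters can be pulled in one go; per the MemTotal dump above they would read Total/Free 1024 and Rsvd/Surp 0 on this machine:

  grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp))' /proc/meminfo
)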
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.864 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.865 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45844820 kB' 'MemAvailable: 49346316 kB' 'Buffers: 2704 kB' 'Cached: 10227988 kB' 'SwapCached: 0 kB' 'Active: 7242632 kB' 'Inactive: 3506596 kB' 'Active(anon): 6848040 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521736 kB' 'Mapped: 208936 kB' 'Shmem: 6329504 kB' 'KReclaimable: 188292 kB' 'Slab: 556296 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368004 kB' 'KernelStack: 12864 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7950908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.866 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:39.867 nr_hugepages=1024 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:39.867 resv_hugepages=0 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:39.867 surplus_hugepages=0 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:39.867 anon_hugepages=0 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.867 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45846124 kB' 'MemAvailable: 49347620 kB' 'Buffers: 2704 kB' 'Cached: 10228008 kB' 'SwapCached: 0 kB' 'Active: 7242984 kB' 'Inactive: 3506596 kB' 'Active(anon): 6848392 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522116 kB' 'Mapped: 208936 kB' 'Shmem: 6329524 kB' 'KReclaimable: 188292 kB' 'Slab: 556296 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 368004 kB' 'KernelStack: 12912 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7951688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.868 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:39.869 17:25:34 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21255928 kB' 'MemUsed: 11621012 kB' 'SwapCached: 0 kB' 'Active: 5072180 kB' 'Inactive: 3264144 kB' 'Active(anon): 4883608 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8034204 kB' 'Mapped: 73248 kB' 'AnonPages: 305256 kB' 'Shmem: 4581488 kB' 'KernelStack: 7704 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114372 kB' 'Slab: 313300 kB' 'SReclaimable: 114372 kB' 'SUnreclaim: 198928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.869 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 [xtrace of the same IFS=': ' / read -r var val _ / continue cycle repeats for every remaining /proc/meminfo field from Unevictable through FilePmdMapped and Unaccepted; none matches HugePages_Surp] 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.870 17:25:34
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.870 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:39.871 node0=1024 expecting 1024 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.871 17:25:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:40.809 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:40.809 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:40.809 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.106 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.106 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.106 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.106 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.106 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.106 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.106 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.106 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.106 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.106 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.106 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.106 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.106 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.106 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
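For reference, the get_meminfo lookups traced above (setup/common.sh@31-33, invoked from setup/hugepages.sh) boil down to a loop of roughly the following shape. This is an illustrative sketch reconstructed from the xtrace, not the verbatim SPDK setup/common.sh source; in particular, the real script reads the file with mapfile and re-emits it via printf, which is collapsed here into a plain while-read:

    # Sketch of the /proc/meminfo lookup seen in the trace above.
    # Splits each "Key:   value kB" line on ': ' and prints the value of the
    # requested key; an optional node argument switches to the per-node meminfo.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local var val _
        # With node empty this path does not exist, so it falls back to /proc/meminfo,
        # matching the [[ -e /sys/devices/system/node/node/meminfo ]] check in the trace.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field just continues
            echo "$val"
            return 0
        done <"$mem_f"
    }

    # e.g. get_meminfo HugePages_Surp  -> prints 0 on this host, as echoed in the trace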
00:02:41.106 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45848272 kB' 'MemAvailable: 49349768 kB' 'Buffers: 2704 kB' 'Cached: 10228076 kB' 'SwapCached: 0 kB' 'Active: 7243636 kB' 'Inactive: 3506596 kB' 'Active(anon): 6849044 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522744 kB' 'Mapped: 208992 kB' 'Shmem: 6329592 kB' 'KReclaimable: 188292 kB' 'Slab: 556268 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 367976 kB' 'KernelStack: 12864 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7950904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:41.106 
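A quick cross-check of the snapshot above (a worked example, not additional log output): HugePages_Total 1024 x Hugepagesize 2048 kB = 2097152 kB, which matches the Hugetlb line, so the pre-existing 1024 pages (about 2 GiB) are still mapped as hugetlb and all of them are free. Because the test sets CLEAR_HUGE=no and NRHUGE=512, setup.sh leaves the larger existing allocation in place instead of shrinking it, which is what the INFO line reports and what the no_shrink_alloc check ("node0=1024 expecting 1024") verifies.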
17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.106 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.106 17:25:36 [xtrace of the same field-by-field scan repeats for every /proc/meminfo field from MemFree through CommitLimit; none matches AnonHugePages, each iteration hits continue] 00:02:41.107 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:02:41.107 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.107 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45849808 kB' 'MemAvailable: 49351304 kB' 'Buffers: 2704 kB' 'Cached: 10228076 kB' 'SwapCached: 0 kB' 'Active: 7242984 kB' 'Inactive: 3506596 kB' 'Active(anon): 6848392 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521568 kB' 'Mapped: 208880 kB' 'Shmem: 6329592 kB' 'KReclaimable: 188292 kB' 'Slab: 556208 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 367916 kB' 'KernelStack: 12832 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7950924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.108 17:25:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.108 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.108 17:25:36 [xtrace of the same field-by-field scan repeats for every /proc/meminfo field from Cached through CmaTotal; none matches HugePages_Surp, each iteration hits continue] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.110 17:25:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45849812 kB' 'MemAvailable: 49351308 kB' 'Buffers: 2704 kB' 'Cached: 10228096 kB' 'SwapCached: 0 kB' 'Active: 7243096 kB' 'Inactive: 3506596 kB' 'Active(anon): 6848504 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522096 kB' 'Mapped: 208932 kB' 'Shmem: 6329612 kB' 'KReclaimable: 188292 kB' 'Slab: 556208 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 367916 kB' 'KernelStack: 12880 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7950948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.110 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.110 17:25:36 [xtrace of the same field-by-field scan repeats for every /proc/meminfo field from SwapCached through PageTables; none matches HugePages_Rsvd, each iteration hits continue] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# continue 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.111 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:41.112 nr_hugepages=1024 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:41.112 resv_hugepages=0 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:41.112 surplus_hugepages=0 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:41.112 anon_hugepages=0 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
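The long run of "[[ <field> == HugePages_Rsvd ]] / continue" checks above is the get_meminfo helper in setup/common.sh walking a snapshot of /proc/meminfo one field at a time: every "key: value" pair is split with IFS=': ', keys that do not match the requested one are skipped with continue, and the matching value is finally echoed back, here HugePages_Rsvd = 0 (resv=0). The trace then immediately starts the same walk again for HugePages_Total. A minimal standalone sketch of that pattern follows; get_field is a hypothetical name and the error handling is simplified, so this illustrates the technique rather than reproducing the SPDK helper itself:

  # get_field KEY [NODE] - print KEY's value from /proc/meminfo or a per-node meminfo
  get_field() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # per-node files prefix every line with "Node <id> "; strip it, then split
      # each "key: value [kB]" line and stop at the requested key
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

  get_field HugePages_Rsvd       # 0 in this run
  get_field HugePages_Total      # 1024 in this run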
00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45850436 kB' 'MemAvailable: 49351932 kB' 'Buffers: 2704 kB' 'Cached: 10228116 kB' 'SwapCached: 0 kB' 'Active: 7243016 kB' 'Inactive: 3506596 kB' 'Active(anon): 6848424 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522012 kB' 'Mapped: 208932 kB' 'Shmem: 6329632 kB' 'KReclaimable: 188292 kB' 'Slab: 556260 kB' 'SReclaimable: 188292 kB' 'SUnreclaim: 367968 kB' 'KernelStack: 12880 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7950972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1818204 kB' 'DirectMap2M: 13830144 kB' 'DirectMap1G: 53477376 kB' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.112 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.113 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21255700 kB' 'MemUsed: 11621240 kB' 'SwapCached: 0 kB' 'Active: 5072468 kB' 'Inactive: 3264144 kB' 'Active(anon): 4883896 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8034212 kB' 'Mapped: 73244 kB' 'AnonPages: 305548 kB' 'Shmem: 4581496 kB' 'KernelStack: 7752 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114372 kB' 'Slab: 313312 kB' 'SReclaimable: 114372 kB' 'SUnreclaim: 198940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.114 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.115 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.116 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:41.116 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.116 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.116 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.116 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.116 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:41.374 node0=1024 expecting 1024 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:41.374 00:02:41.374 real 0m2.623s 00:02:41.374 user 0m1.064s 00:02:41.374 sys 0m1.478s 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.374 17:25:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:41.374 ************************************ 00:02:41.374 END TEST no_shrink_alloc 
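Before the END banner above, hugepages.sh confirmed the allocation three ways: the meminfo walk reported HugePages_Total = 1024 with resv_hugepages=0 and surplus_hugepages=0, get_nodes read the per-node counters from sysfs (nodes_sys[0]=1024, nodes_sys[1]=0 across no_nodes=2), and the final per-node comparison printed "node0=1024 expecting 1024". The per-node counters come straight from sysfs; a short sketch of that read, assuming the 2048kB page-size directory implied by "Hugepagesize: 2048 kB" in the dump (the traced script uses the extglob pattern node+([0-9]); the plain glob below is equivalent for this purpose):

  # report how the 2 MiB hugepage pool is spread across NUMA nodes
  for node in /sys/devices/system/node/node[0-9]*; do
      id=${node##*node}
      echo "node$id=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")"
  done
  # expected output in this run:
  #   node0=1024
  #   node1=0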
00:02:41.374 ************************************ 00:02:41.374 17:25:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:41.374 17:25:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:41.374 00:02:41.374 real 0m10.906s 00:02:41.374 user 0m4.289s 00:02:41.374 sys 0m5.514s 00:02:41.374 17:25:36 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.374 17:25:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:41.374 ************************************ 00:02:41.374 END TEST hugepages 00:02:41.374 ************************************ 00:02:41.374 17:25:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:41.374 17:25:36 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:41.374 17:25:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.374 17:25:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.374 17:25:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:41.374 ************************************ 00:02:41.374 START TEST driver 00:02:41.374 ************************************ 00:02:41.374 17:25:36 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:41.374 * Looking for test storage... 
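The two helpers exercised in the hugepages test above reduce to a few lines of shell. This is a minimal sketch under stated assumptions, not the setup/common.sh or setup/hugepages.sh source: the function names are illustrative, and writing nr_hugepages requires root.
  get_meminfo_field() {
    # Look up one field (e.g. HugePages_Surp) from a meminfo-style file the way
    # the traced loop does: split on ': ' and stop at the first matching name.
    local want=$1 src=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$want" ]] || continue   # every other field is skipped
      echo "$val"                         # any unit such as "kB" lands in $_
      return 0
    done < "$src"
    return 1
  }
  clear_hugepages() {
    # Mirror the clear_hp teardown: zero every per-node, per-size counter so
    # the next test starts clean, then flag later setup.sh runs to keep it so.
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
    done
    export CLEAR_HUGE=yes
  }
  get_meminfo_field HugePages_Surp   # e.g. prints 0, as in the trace above
  clear_hugepages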
00:02:41.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.374 17:25:36 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:41.374 17:25:36 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:41.374 17:25:36 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.921 17:25:38 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:43.921 17:25:38 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:43.921 17:25:38 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.921 17:25:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:43.921 ************************************ 00:02:43.921 START TEST guess_driver 00:02:43.921 ************************************ 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:43.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:43.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:43.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:43.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:43.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:43.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:43.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:43.921 17:25:38 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:43.921 Looking for driver=vfio-pci 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.921 17:25:38 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[log condensed: while setup.sh config lists each bound device, the guess_driver loop at setup/driver.sh@57-61 repeats the same three lines, [[ -> == \-\> ]], [[ vfio-pci == vfio-pci ]] and read -r _ _ _ _ marker setup_driver, once per reported device between 00:02:45.296 and 00:02:46.235]
00:02:46.235 17:25:41 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:46.235 17:25:41 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:46.235 17:25:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.235 17:25:41 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.770 00:02:48.770 real 0m4.833s 00:02:48.770 user 0m1.090s 00:02:48.770 sys 0m1.882s 00:02:48.770 17:25:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:48.770 17:25:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:48.770 ************************************ 00:02:48.770 END TEST guess_driver 00:02:48.770 ************************************ 00:02:48.770 17:25:43 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:02:48.770 00:02:48.770 real 0m7.401s 00:02:48.770 user 0m1.628s 00:02:48.770 sys 0m2.914s
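The driver decision traced in this test (the enable_unsafe_noiommu_mode parameter, the IOMMU-group count, and modprobe --show-depends vfio_pci) can be summarised as below. This is a hedged sketch only: the function name and the fallback to uio_pci_generic are assumptions for illustration, not a copy of setup/driver.sh.
  pick_driver() {
    # vfio-pci is usable when IOMMU groups exist (or unsafe no-IOMMU mode is
    # enabled) and the module resolves; otherwise fall back to uio_pci_generic.
    shopt -s nullglob                 # so an empty iommu_groups dir counts as 0
    local unsafe=N groups
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
      unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    groups=(/sys/kernel/iommu_groups/*)
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
       modprobe --show-depends vfio_pci | grep -q '\.ko'; then
      echo vfio-pci
    else
      echo uio_pci_generic
    fi
  }
On this node the trace shows 141 IOMMU groups and a resolvable vfio_pci module, which is why vfio-pci is selected.
17:25:43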
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:48.770 17:25:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:48.770 ************************************ 00:02:48.770 END TEST driver 00:02:48.770 ************************************ 00:02:48.770 17:25:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:48.770 17:25:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:48.770 17:25:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.770 17:25:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.770 17:25:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:48.770 ************************************ 00:02:48.770 START TEST devices 00:02:48.770 ************************************ 00:02:48.770 17:25:43 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:48.770 * Looking for test storage... 00:02:48.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:48.771 17:25:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:48.771 17:25:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:48.771 17:25:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.771 17:25:43 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.145 17:25:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:50.145 17:25:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:50.145 17:25:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:50.145 
17:25:45 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:50.405 No valid GPT data, bailing 00:02:50.405 17:25:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:50.405 17:25:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:50.405 17:25:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:50.405 17:25:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:50.405 17:25:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:50.405 17:25:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:50.405 17:25:45 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:50.405 17:25:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:50.405 17:25:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:50.405 17:25:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:50.405 17:25:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:50.405 17:25:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:50.405 17:25:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:50.405 17:25:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.405 17:25:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.405 17:25:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:50.405 ************************************ 00:02:50.405 START TEST nvme_mount 00:02:50.405 ************************************ 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:50.405 17:25:45 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:51.343 Creating new GPT entries in memory. 00:02:51.343 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:51.343 other utilities. 00:02:51.343 17:25:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:51.343 17:25:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:51.343 17:25:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:51.343 17:25:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:51.343 17:25:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:52.279 Creating new GPT entries in memory. 00:02:52.279 The operation has completed successfully. 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2096510 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:52.279 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:52.538 17:25:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.538 17:25:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[log condensed: the same [[ <pci> == \0\0\0\0\:\8\8\:\0\0\.\0 ]] / read -r pci _ _ status pair repeats for 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.6, none of which match the allowed NVMe device]
00:02:53.474 17:25:48 setup.sh.devices.nvme_mount --
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.474 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.732 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:53.733 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:53.733 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:53.991 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:53.991 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:53.991 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:53.991 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.991 17:25:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:54.972 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]]
[log condensed: setup/devices.sh@60-62 repeats the same read -r pci _ _ status / [[ <pci> == \0\0\0\0\:\8\8\:\0\0\.\0 ]] pair for 0000:00:04.4 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0, none of which match]
00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
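The verify step that keeps reappearing in this trace can be expressed compactly: restrict setup.sh to one controller with PCI_ALLOWED and confirm that it refuses to bind the device because the expected mount or holder is still active. A minimal sketch, assuming the repository-relative script path; the function name and the expected tag are illustrative values taken from this run.
  verify_active() {
    # Re-run setup.sh config for a single BDF and look for the
    # "Active devices: ..., so not binding PCI dev" status line.
    local bdf=$1 expected=$2 pci status found=0
    while read -r pci _ _ status; do
      [[ $pci == "$bdf" && $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(PCI_ALLOWED="$bdf" ./scripts/setup.sh config)
    (( found == 1 ))
  }
  # Example from this step: the mounted filesystem keeps 0000:88:00.0 with the kernel driver
  verify_active 0000:88:00.0 nvme0n1:nvme0n1 && echo 'device still claimed, as expected'
00:02:55.231 17:25:50 setup.sh.devices.nvme_mount --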
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.231 17:25:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[log condensed: the same [[ <pci> == \0\0\0\0\:\8\8\:\0\0\.\0 ]] / read -r pci _ _ status pair repeats for 0000:00:04.1, 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0, none of which match]
00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:56.604 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:56.604 00:02:56.604 real 0m6.229s 00:02:56.604 user 0m1.431s 00:02:56.604 sys 0m2.362s 00:02:56.604 17:25:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
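Pulled out of the trace, the whole nvme_mount flow that just finished uses only standard tools (sgdisk, mkfs.ext4, mount, wipefs). A minimal sketch under stated assumptions: the device name, mount point and partition bounds below are illustrative, taken from this run rather than required values.
  disk=/dev/nvme0n1                         # device under test in this run
  mnt=/tmp/nvme_mount                       # illustrative mount point
  sgdisk "$disk" --zap-all                  # drop any existing partition tables
  sgdisk "$disk" --new=1:2048:2099199       # one ~1 GiB partition, as in the trace
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                    # the dummy file the verify step looks for
  # teardown, mirroring cleanup_nvme
  umount "$mnt"
  wipefs --all "${disk}p1"
  wipefs --all "$disk"
00:02:56.604 17:25:51 setup.sh.devices.nvme_mount --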
common/autotest_common.sh@10 -- # set +x 00:02:56.604 ************************************ 00:02:56.604 END TEST nvme_mount 00:02:56.604 ************************************ 00:02:56.604 17:25:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:02:56.604 17:25:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:02:56.604 17:25:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.604 17:25:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.604 17:25:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:56.604 ************************************ 00:02:56.604 START TEST dm_mount 00:02:56.604 ************************************ 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:56.604 17:25:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:02:57.537 Creating new GPT entries in memory. 00:02:57.537 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:57.537 other utilities. 00:02:57.537 17:25:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:57.537 17:25:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:57.537 17:25:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:02:57.537 17:25:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:57.537 17:25:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:58.910 Creating new GPT entries in memory. 00:02:58.910 The operation has completed successfully. 00:02:58.910 17:25:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:58.910 17:25:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:58.910 17:25:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:58.910 17:25:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:58.910 17:25:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:02:59.845 The operation has completed successfully. 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2098900 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.846 17:25:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.779 17:25:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status
[log condensed: the same [[ <pci> == \0\0\0\0\:\8\8\:\0\0\.\0 ]] / read -r pci _ _ status pair repeats for 0000:00:04.2 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0, none of which match]
00:03:01.036 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
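The "holder@nvme0n1p1:dm-0" status the test checks for comes straight from sysfs: once the device-mapper node exists, each backing partition lists it under its holders directory. A minimal sketch of that check; the device names are the ones from this run and purely illustrative.
  dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0, as in the trace
  dm=${dm##*/}                                 # -> dm-0
  for part in nvme0n1p1 nvme0n1p2; do
    [[ -e /sys/class/block/$part/holders/$dm ]] && echo "$part is held by $dm"
  done
00:03:01.037 17:25:55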
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.037 17:25:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:01.972 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:02.232 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:02.232 00:03:02.232 real 0m5.646s 00:03:02.232 user 0m0.927s 00:03:02.232 sys 0m1.590s 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.232 17:25:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:02.232 ************************************ 00:03:02.232 END TEST dm_mount 00:03:02.232 ************************************ 00:03:02.232 17:25:57 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:02.232 17:25:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:02.232 17:25:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:02.232 17:25:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:02.232 17:25:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:02.232 17:25:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:02.232 17:25:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:02.232 17:25:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:02.492 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:02.492 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:02.492 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:02.492 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:02.492 17:25:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:02.492 17:25:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:02.492 17:25:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:02.492 17:25:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:02.492 17:25:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:02.492 17:25:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:02.492 17:25:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:02.492 00:03:02.492 real 0m13.794s 00:03:02.492 user 0m3.028s 00:03:02.492 sys 0m4.965s 00:03:02.492 17:25:57 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.492 17:25:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:02.492 ************************************ 00:03:02.492 END TEST devices 00:03:02.492 ************************************ 00:03:02.492 17:25:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:02.492 00:03:02.492 real 0m42.566s 00:03:02.492 user 0m12.111s 00:03:02.492 sys 0m18.699s 00:03:02.492 17:25:57 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.492 17:25:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:02.492 ************************************ 00:03:02.492 END TEST setup.sh 00:03:02.492 ************************************ 00:03:02.492 17:25:57 -- common/autotest_common.sh@1142 -- # return 0 00:03:02.492 17:25:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:03.866 Hugepages 00:03:03.866 node hugesize free / total 00:03:03.866 node0 1048576kB 0 / 0 00:03:03.866 node0 2048kB 2048 / 2048 00:03:03.866 node1 1048576kB 0 / 0 00:03:03.866 node1 2048kB 0 / 0 00:03:03.866 00:03:03.866 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:03.866 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:03.866 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:03.866 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:03.866 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:03.866 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:03.866 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:03.866 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:03.866 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:03.866 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:03.866 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:03.866 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:03.866 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:03.866 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:03.866 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:03.866 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:03.866 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:03.866 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:03.866 17:25:58 -- spdk/autotest.sh@130 -- # uname -s 00:03:03.866 17:25:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:03.866 17:25:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:03.866 17:25:58 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:04.797 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:04.797 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:04.797 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:04.797 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:04.797 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:05.057 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:05.057 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:05.994 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.994 17:26:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:07.370 17:26:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:07.370 17:26:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:07.370 17:26:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:07.370 17:26:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:07.370 17:26:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:07.370 17:26:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:07.370 17:26:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:07.370 17:26:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:07.371 17:26:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:07.371 17:26:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:07.371 17:26:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:07.371 17:26:02 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.305 Waiting for block devices as requested 00:03:08.305 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:08.305 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:08.563 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:08.563 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:08.563 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:08.822 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:08.822 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:08.822 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:08.822 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:09.080 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:09.080 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:09.080 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:09.080 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:09.339 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:09.339 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:09.339 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:09.339 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:09.598 17:26:04 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:09.598 17:26:04 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:09.598 17:26:04 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:09.598 17:26:04 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:09.598 17:26:04 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:09.598 17:26:04 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:09.598 17:26:04 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:09.598 17:26:04 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:09.598 17:26:04 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:09.598 17:26:04 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:09.598 17:26:04 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:09.598 17:26:04 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:09.598 17:26:04 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:09.598 17:26:04 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:09.598 17:26:04 -- common/autotest_common.sh@1557 -- # continue 00:03:09.598 17:26:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:09.598 17:26:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:09.598 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:03:09.598 17:26:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:09.598 17:26:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:09.598 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:03:09.598 17:26:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.015 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.015 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:11.015 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.015 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.015 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:11.015 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.015 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.015 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:11.015 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.015 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:11.015 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.015 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.015 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:11.015 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.015 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.015 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:11.952 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.952 17:26:06 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:11.952 17:26:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:11.952 17:26:06 -- common/autotest_common.sh@10 -- # set +x 00:03:11.952 17:26:06 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:11.952 17:26:06 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:11.952 17:26:06 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:11.952 17:26:06 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:11.952 17:26:06 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:11.952 17:26:06 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:11.952 17:26:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:11.952 17:26:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:11.952 17:26:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:11.952 17:26:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:11.952 17:26:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:11.952 17:26:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:11.952 17:26:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:11.952 17:26:07 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:11.952 17:26:07 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:11.952 17:26:07 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:11.952 17:26:07 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:11.952 17:26:07 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:11.952 17:26:07 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:11.952 17:26:07 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:11.952 17:26:07 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2104070 00:03:11.952 17:26:07 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:11.952 17:26:07 -- common/autotest_common.sh@1598 -- # waitforlisten 2104070 00:03:11.952 17:26:07 -- common/autotest_common.sh@829 -- # '[' -z 2104070 ']' 00:03:11.952 17:26:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:11.952 17:26:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:11.952 17:26:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:11.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:11.952 17:26:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:11.952 17:26:07 -- common/autotest_common.sh@10 -- # set +x 00:03:12.211 [2024-07-15 17:26:07.097443] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:03:12.211 [2024-07-15 17:26:07.097543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104070 ] 00:03:12.211 EAL: No free 2048 kB hugepages reported on node 1 00:03:12.211 [2024-07-15 17:26:07.160161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:12.211 [2024-07-15 17:26:07.276903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:13.147 17:26:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:13.148 17:26:08 -- common/autotest_common.sh@862 -- # return 0 00:03:13.148 17:26:08 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:13.148 17:26:08 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:13.148 17:26:08 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:16.437 nvme0n1 00:03:16.437 17:26:11 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:16.437 [2024-07-15 17:26:11.335357] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:16.437 [2024-07-15 17:26:11.335410] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:16.437 request: 00:03:16.437 { 00:03:16.437 "nvme_ctrlr_name": "nvme0", 00:03:16.437 "password": "test", 00:03:16.437 "method": "bdev_nvme_opal_revert", 00:03:16.437 "req_id": 1 00:03:16.437 } 00:03:16.437 Got JSON-RPC error response 00:03:16.437 response: 00:03:16.437 { 00:03:16.437 "code": -32603, 00:03:16.437 "message": "Internal error" 00:03:16.437 } 00:03:16.437 17:26:11 -- common/autotest_common.sh@1604 -- # true 00:03:16.437 17:26:11 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:16.437 17:26:11 -- common/autotest_common.sh@1608 -- # killprocess 2104070 00:03:16.437 17:26:11 -- common/autotest_common.sh@948 -- # '[' -z 2104070 ']' 00:03:16.437 17:26:11 -- common/autotest_common.sh@952 -- # kill -0 2104070 00:03:16.437 17:26:11 -- common/autotest_common.sh@953 -- # uname 00:03:16.437 17:26:11 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:16.437 17:26:11 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2104070 00:03:16.437 17:26:11 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:16.437 17:26:11 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:16.437 17:26:11 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2104070' 00:03:16.437 killing process with pid 2104070 00:03:16.437 17:26:11 -- common/autotest_common.sh@967 -- # kill 2104070 00:03:16.437 17:26:11 -- common/autotest_common.sh@972 -- # wait 2104070 00:03:18.344 17:26:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:18.344 17:26:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:18.344 17:26:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:18.344 17:26:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:18.344 17:26:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:18.344 17:26:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:18.344 17:26:13 -- common/autotest_common.sh@10 -- # set +x 00:03:18.344 17:26:13 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:18.344 17:26:13 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:18.344 17:26:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.344 17:26:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.344 17:26:13 -- common/autotest_common.sh@10 -- # set +x 00:03:18.344 ************************************ 00:03:18.344 START TEST env 00:03:18.344 ************************************ 00:03:18.344 17:26:13 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:18.344 * Looking for test storage... 00:03:18.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:18.344 17:26:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:18.344 17:26:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.344 17:26:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.344 17:26:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:18.344 ************************************ 00:03:18.344 START TEST env_memory 00:03:18.344 ************************************ 00:03:18.344 17:26:13 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:18.344 00:03:18.344 00:03:18.344 CUnit - A unit testing framework for C - Version 2.1-3 00:03:18.344 http://cunit.sourceforge.net/ 00:03:18.344 00:03:18.344 00:03:18.344 Suite: memory 00:03:18.344 Test: alloc and free memory map ...[2024-07-15 17:26:13.347012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:18.344 passed 00:03:18.344 Test: mem map translation ...[2024-07-15 17:26:13.366733] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:18.344 [2024-07-15 17:26:13.366755] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:18.344 [2024-07-15 17:26:13.366805] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:18.344 [2024-07-15 17:26:13.366817] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:18.344 passed 00:03:18.344 Test: mem map registration ...[2024-07-15 17:26:13.407492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:18.344 [2024-07-15 17:26:13.407513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:18.344 passed 00:03:18.344 Test: mem map adjacent registrations ...passed 00:03:18.344 00:03:18.344 Run Summary: Type Total Ran Passed Failed Inactive 00:03:18.344 suites 1 1 n/a 0 0 00:03:18.344 tests 4 4 4 0 0 00:03:18.344 asserts 152 152 152 0 n/a 00:03:18.344 00:03:18.344 Elapsed time = 0.142 seconds 00:03:18.344 00:03:18.344 real 0m0.149s 00:03:18.344 user 0m0.141s 00:03:18.344 sys 0m0.007s 00:03:18.344 17:26:13 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.344 17:26:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:18.344 ************************************ 00:03:18.344 END TEST env_memory 00:03:18.344 ************************************ 00:03:18.604 17:26:13 env -- common/autotest_common.sh@1142 -- # return 0 00:03:18.604 17:26:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:18.604 17:26:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.604 17:26:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.604 17:26:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:18.604 ************************************ 00:03:18.604 START TEST env_vtophys 00:03:18.604 ************************************ 00:03:18.604 17:26:13 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:18.604 EAL: lib.eal log level changed from notice to debug 00:03:18.604 EAL: Detected lcore 0 as core 0 on socket 0 00:03:18.604 EAL: Detected lcore 1 as core 1 on socket 0 00:03:18.604 EAL: Detected lcore 2 as core 2 on socket 0 00:03:18.604 EAL: Detected lcore 3 as core 3 on socket 0 00:03:18.604 EAL: Detected lcore 4 as core 4 on socket 0 00:03:18.604 EAL: Detected lcore 5 as core 5 on socket 0 00:03:18.604 EAL: Detected lcore 6 as core 8 on socket 0 00:03:18.604 EAL: Detected lcore 7 as core 9 on socket 0 00:03:18.604 EAL: Detected lcore 8 as core 10 on socket 0 00:03:18.604 EAL: Detected lcore 9 as core 11 on socket 0 00:03:18.604 EAL: Detected lcore 10 as core 12 on socket 0 00:03:18.604 EAL: Detected lcore 11 as core 13 on socket 0 00:03:18.604 EAL: Detected lcore 12 as core 0 on socket 1 00:03:18.604 EAL: Detected lcore 13 as core 1 on socket 1 00:03:18.604 EAL: Detected lcore 14 as core 2 on socket 1 00:03:18.604 EAL: Detected lcore 15 as core 3 on socket 1 00:03:18.604 EAL: Detected lcore 16 as core 4 on socket 1 00:03:18.604 EAL: Detected lcore 17 as core 5 on socket 1 00:03:18.604 EAL: Detected lcore 18 as core 8 on socket 1 00:03:18.604 EAL: Detected lcore 19 as core 9 on socket 1 00:03:18.604 EAL: Detected lcore 20 as core 10 on socket 1 00:03:18.604 EAL: Detected lcore 21 as core 11 on socket 1 00:03:18.604 EAL: Detected lcore 22 as core 12 on socket 1 00:03:18.604 EAL: Detected lcore 23 as core 13 on socket 1 00:03:18.604 EAL: Detected lcore 24 as core 0 on socket 0 00:03:18.604 EAL: Detected lcore 25 as core 1 on socket 0 00:03:18.604 EAL: Detected lcore 26 as core 2 on socket 0 00:03:18.604 EAL: Detected lcore 27 as core 3 on socket 0 00:03:18.604 EAL: Detected lcore 28 as core 4 on socket 0 00:03:18.604 EAL: Detected lcore 29 as core 5 on socket 0 00:03:18.604 EAL: Detected lcore 30 as core 8 on socket 0 00:03:18.604 EAL: Detected lcore 31 as core 9 on socket 0 00:03:18.604 EAL: Detected lcore 32 as core 10 on socket 0 00:03:18.604 EAL: Detected lcore 33 as core 11 on socket 0 00:03:18.604 EAL: Detected lcore 34 as core 12 on socket 0 00:03:18.604 EAL: Detected lcore 35 as core 13 on socket 0 00:03:18.604 EAL: Detected lcore 36 as core 0 on socket 1 00:03:18.604 EAL: Detected lcore 37 as core 1 on socket 1 00:03:18.604 EAL: Detected lcore 38 as core 2 on socket 1 00:03:18.604 EAL: Detected lcore 39 as core 3 on socket 1 00:03:18.604 EAL: Detected lcore 40 as core 4 on socket 1 00:03:18.604 EAL: Detected lcore 41 as core 5 on socket 1 00:03:18.604 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:18.604 EAL: Detected lcore 43 as core 9 on socket 1 00:03:18.604 EAL: Detected lcore 44 as core 10 on socket 1 00:03:18.604 EAL: Detected lcore 45 as core 11 on socket 1 00:03:18.604 EAL: Detected lcore 46 as core 12 on socket 1 00:03:18.604 EAL: Detected lcore 47 as core 13 on socket 1 00:03:18.604 EAL: Maximum logical cores by configuration: 128 00:03:18.604 EAL: Detected CPU lcores: 48 00:03:18.604 EAL: Detected NUMA nodes: 2 00:03:18.604 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:18.604 EAL: Detected shared linkage of DPDK 00:03:18.604 EAL: No shared files mode enabled, IPC will be disabled 00:03:18.604 EAL: Bus pci wants IOVA as 'DC' 00:03:18.604 EAL: Buses did not request a specific IOVA mode. 00:03:18.604 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:18.604 EAL: Selected IOVA mode 'VA' 00:03:18.604 EAL: No free 2048 kB hugepages reported on node 1 00:03:18.604 EAL: Probing VFIO support... 00:03:18.604 EAL: IOMMU type 1 (Type 1) is supported 00:03:18.604 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:18.604 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:18.604 EAL: VFIO support initialized 00:03:18.604 EAL: Ask a virtual area of 0x2e000 bytes 00:03:18.604 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:18.604 EAL: Setting up physically contiguous memory... 00:03:18.604 EAL: Setting maximum number of open files to 524288 00:03:18.604 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:18.604 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:18.604 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:18.604 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.604 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:18.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.604 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.604 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:18.604 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:18.604 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.604 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:18.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.604 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.604 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:18.604 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:18.604 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.604 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:18.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.604 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.604 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:18.604 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:18.604 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.604 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:18.604 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:18.604 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.604 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:18.604 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:18.604 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:18.604 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.604 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:18.604 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:18.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.605 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:18.605 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:18.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.605 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:18.605 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:18.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.605 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:18.605 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:18.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.605 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:18.605 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:18.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.605 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:18.605 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:18.605 EAL: Ask a virtual area of 0x61000 bytes 00:03:18.605 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:18.605 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:18.605 EAL: Ask a virtual area of 0x400000000 bytes 00:03:18.605 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:18.605 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:18.605 EAL: Hugepages will be freed exactly as allocated. 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: TSC frequency is ~2700000 KHz 00:03:18.605 EAL: Main lcore 0 is ready (tid=7ff9c2baea00;cpuset=[0]) 00:03:18.605 EAL: Trying to obtain current memory policy. 00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 0 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 2MB 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:18.605 EAL: Mem event callback 'spdk:(nil)' registered 00:03:18.605 00:03:18.605 00:03:18.605 CUnit - A unit testing framework for C - Version 2.1-3 00:03:18.605 http://cunit.sourceforge.net/ 00:03:18.605 00:03:18.605 00:03:18.605 Suite: components_suite 00:03:18.605 Test: vtophys_malloc_test ...passed 00:03:18.605 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 4 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 4MB 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was shrunk by 4MB 00:03:18.605 EAL: Trying to obtain current memory policy. 
00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 4 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 6MB 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was shrunk by 6MB 00:03:18.605 EAL: Trying to obtain current memory policy. 00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 4 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 10MB 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was shrunk by 10MB 00:03:18.605 EAL: Trying to obtain current memory policy. 00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 4 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 18MB 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was shrunk by 18MB 00:03:18.605 EAL: Trying to obtain current memory policy. 00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 4 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 34MB 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was shrunk by 34MB 00:03:18.605 EAL: Trying to obtain current memory policy. 00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 4 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 66MB 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was shrunk by 66MB 00:03:18.605 EAL: Trying to obtain current memory policy. 
00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.605 EAL: Restoring previous memory policy: 4 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was expanded by 130MB 00:03:18.605 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.605 EAL: request: mp_malloc_sync 00:03:18.605 EAL: No shared files mode enabled, IPC is disabled 00:03:18.605 EAL: Heap on socket 0 was shrunk by 130MB 00:03:18.605 EAL: Trying to obtain current memory policy. 00:03:18.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:18.865 EAL: Restoring previous memory policy: 4 00:03:18.865 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.865 EAL: request: mp_malloc_sync 00:03:18.865 EAL: No shared files mode enabled, IPC is disabled 00:03:18.865 EAL: Heap on socket 0 was expanded by 258MB 00:03:18.865 EAL: Calling mem event callback 'spdk:(nil)' 00:03:18.865 EAL: request: mp_malloc_sync 00:03:18.865 EAL: No shared files mode enabled, IPC is disabled 00:03:18.865 EAL: Heap on socket 0 was shrunk by 258MB 00:03:18.865 EAL: Trying to obtain current memory policy. 00:03:18.865 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:19.124 EAL: Restoring previous memory policy: 4 00:03:19.124 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.124 EAL: request: mp_malloc_sync 00:03:19.124 EAL: No shared files mode enabled, IPC is disabled 00:03:19.124 EAL: Heap on socket 0 was expanded by 514MB 00:03:19.124 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.384 EAL: request: mp_malloc_sync 00:03:19.384 EAL: No shared files mode enabled, IPC is disabled 00:03:19.384 EAL: Heap on socket 0 was shrunk by 514MB 00:03:19.384 EAL: Trying to obtain current memory policy. 
00:03:19.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:19.643 EAL: Restoring previous memory policy: 4 00:03:19.643 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.643 EAL: request: mp_malloc_sync 00:03:19.643 EAL: No shared files mode enabled, IPC is disabled 00:03:19.643 EAL: Heap on socket 0 was expanded by 1026MB 00:03:19.643 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.904 EAL: request: mp_malloc_sync 00:03:19.904 EAL: No shared files mode enabled, IPC is disabled 00:03:19.904 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:19.904 passed 00:03:19.904 00:03:19.904 Run Summary: Type Total Ran Passed Failed Inactive 00:03:19.904 suites 1 1 n/a 0 0 00:03:19.904 tests 2 2 2 0 0 00:03:19.904 asserts 497 497 497 0 n/a 00:03:19.904 00:03:19.904 Elapsed time = 1.374 seconds 00:03:19.904 EAL: Calling mem event callback 'spdk:(nil)' 00:03:19.904 EAL: request: mp_malloc_sync 00:03:19.904 EAL: No shared files mode enabled, IPC is disabled 00:03:19.904 EAL: Heap on socket 0 was shrunk by 2MB 00:03:19.904 EAL: No shared files mode enabled, IPC is disabled 00:03:19.904 EAL: No shared files mode enabled, IPC is disabled 00:03:19.904 EAL: No shared files mode enabled, IPC is disabled 00:03:19.904 00:03:19.904 real 0m1.490s 00:03:19.904 user 0m0.858s 00:03:19.904 sys 0m0.595s 00:03:19.904 17:26:14 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.904 17:26:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:19.904 ************************************ 00:03:19.904 END TEST env_vtophys 00:03:19.904 ************************************ 00:03:19.904 17:26:15 env -- common/autotest_common.sh@1142 -- # return 0 00:03:19.904 17:26:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:19.904 17:26:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.904 17:26:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.904 17:26:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:19.904 ************************************ 00:03:19.904 START TEST env_pci 00:03:19.904 ************************************ 00:03:19.904 17:26:15 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:20.163 00:03:20.163 00:03:20.163 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.163 http://cunit.sourceforge.net/ 00:03:20.163 00:03:20.163 00:03:20.163 Suite: pci 00:03:20.163 Test: pci_hook ...[2024-07-15 17:26:15.047568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2105085 has claimed it 00:03:20.163 EAL: Cannot find device (10000:00:01.0) 00:03:20.163 EAL: Failed to attach device on primary process 00:03:20.163 passed 00:03:20.163 00:03:20.163 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.163 suites 1 1 n/a 0 0 00:03:20.163 tests 1 1 1 0 0 00:03:20.163 asserts 25 25 25 0 n/a 00:03:20.163 00:03:20.163 Elapsed time = 0.022 seconds 00:03:20.163 00:03:20.163 real 0m0.033s 00:03:20.163 user 0m0.015s 00:03:20.163 sys 0m0.018s 00:03:20.163 17:26:15 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.163 17:26:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:20.163 ************************************ 00:03:20.163 END TEST env_pci 00:03:20.163 ************************************ 
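[Editor's annotation, not part of the captured console output] The env_memory, env_vtophys and env_pci results above come from standalone CUnit binaries that test/env/env.sh drives through run_test. A minimal sketch of re-running them outside Jenkins, assuming the same workspace checkout used in this log; the standalone invocation details (root privileges, pre-configured hugepages for the vtophys case) are assumptions, not something this log states:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/env/memory/memory_ut        # mem map alloc/translation/registration suite seen above
  sudo ./test/env/vtophys/vtophys    # EAL heap expand/shrink exercise seen above
  ./test/env/pci/pci_ut              # pci_hook double-claim test seen above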
00:03:20.163 17:26:15 env -- common/autotest_common.sh@1142 -- # return 0 00:03:20.163 17:26:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:20.163 17:26:15 env -- env/env.sh@15 -- # uname 00:03:20.163 17:26:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:20.163 17:26:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:20.163 17:26:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:20.163 17:26:15 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:20.163 17:26:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.163 17:26:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.163 ************************************ 00:03:20.163 START TEST env_dpdk_post_init 00:03:20.163 ************************************ 00:03:20.163 17:26:15 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:20.163 EAL: Detected CPU lcores: 48 00:03:20.163 EAL: Detected NUMA nodes: 2 00:03:20.163 EAL: Detected shared linkage of DPDK 00:03:20.163 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:20.163 EAL: Selected IOVA mode 'VA' 00:03:20.163 EAL: No free 2048 kB hugepages reported on node 1 00:03:20.163 EAL: VFIO support initialized 00:03:20.163 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:20.163 EAL: Using IOMMU type 1 (Type 1) 00:03:20.163 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:20.163 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:20.163 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:20.163 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:20.163 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:20.163 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:20.423 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:21.362 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:24.653 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:24.653 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:24.653 Starting DPDK initialization... 00:03:24.653 Starting SPDK post initialization... 00:03:24.653 SPDK NVMe probe 00:03:24.653 Attaching to 0000:88:00.0 00:03:24.653 Attached to 0000:88:00.0 00:03:24.653 Cleaning up... 
00:03:24.653 00:03:24.653 real 0m4.383s 00:03:24.653 user 0m3.276s 00:03:24.653 sys 0m0.163s 00:03:24.653 17:26:19 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.653 17:26:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:24.653 ************************************ 00:03:24.653 END TEST env_dpdk_post_init 00:03:24.653 ************************************ 00:03:24.653 17:26:19 env -- common/autotest_common.sh@1142 -- # return 0 00:03:24.653 17:26:19 env -- env/env.sh@26 -- # uname 00:03:24.653 17:26:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:24.653 17:26:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:24.653 17:26:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.653 17:26:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.653 17:26:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.653 ************************************ 00:03:24.653 START TEST env_mem_callbacks 00:03:24.653 ************************************ 00:03:24.653 17:26:19 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:24.653 EAL: Detected CPU lcores: 48 00:03:24.653 EAL: Detected NUMA nodes: 2 00:03:24.653 EAL: Detected shared linkage of DPDK 00:03:24.653 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:24.653 EAL: Selected IOVA mode 'VA' 00:03:24.653 EAL: No free 2048 kB hugepages reported on node 1 00:03:24.653 EAL: VFIO support initialized 00:03:24.653 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:24.653 00:03:24.653 00:03:24.653 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.653 http://cunit.sourceforge.net/ 00:03:24.653 00:03:24.653 00:03:24.653 Suite: memory 00:03:24.654 Test: test ... 
00:03:24.654 register 0x200000200000 2097152 00:03:24.654 malloc 3145728 00:03:24.654 register 0x200000400000 4194304 00:03:24.654 buf 0x200000500000 len 3145728 PASSED 00:03:24.654 malloc 64 00:03:24.654 buf 0x2000004fff40 len 64 PASSED 00:03:24.654 malloc 4194304 00:03:24.654 register 0x200000800000 6291456 00:03:24.654 buf 0x200000a00000 len 4194304 PASSED 00:03:24.654 free 0x200000500000 3145728 00:03:24.654 free 0x2000004fff40 64 00:03:24.654 unregister 0x200000400000 4194304 PASSED 00:03:24.654 free 0x200000a00000 4194304 00:03:24.654 unregister 0x200000800000 6291456 PASSED 00:03:24.654 malloc 8388608 00:03:24.654 register 0x200000400000 10485760 00:03:24.654 buf 0x200000600000 len 8388608 PASSED 00:03:24.654 free 0x200000600000 8388608 00:03:24.654 unregister 0x200000400000 10485760 PASSED 00:03:24.654 passed 00:03:24.654 00:03:24.654 Run Summary: Type Total Ran Passed Failed Inactive 00:03:24.654 suites 1 1 n/a 0 0 00:03:24.654 tests 1 1 1 0 0 00:03:24.654 asserts 15 15 15 0 n/a 00:03:24.654 00:03:24.654 Elapsed time = 0.005 seconds 00:03:24.654 00:03:24.654 real 0m0.048s 00:03:24.654 user 0m0.012s 00:03:24.654 sys 0m0.036s 00:03:24.654 17:26:19 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.654 17:26:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:24.654 ************************************ 00:03:24.654 END TEST env_mem_callbacks 00:03:24.654 ************************************ 00:03:24.654 17:26:19 env -- common/autotest_common.sh@1142 -- # return 0 00:03:24.654 00:03:24.654 real 0m6.376s 00:03:24.654 user 0m4.415s 00:03:24.654 sys 0m0.999s 00:03:24.654 17:26:19 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.654 17:26:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.654 ************************************ 00:03:24.654 END TEST env 00:03:24.654 ************************************ 00:03:24.654 17:26:19 -- common/autotest_common.sh@1142 -- # return 0 00:03:24.654 17:26:19 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:24.654 17:26:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.654 17:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.654 17:26:19 -- common/autotest_common.sh@10 -- # set +x 00:03:24.654 ************************************ 00:03:24.654 START TEST rpc 00:03:24.654 ************************************ 00:03:24.654 17:26:19 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:24.654 * Looking for test storage... 00:03:24.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:24.654 17:26:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2105746 00:03:24.654 17:26:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:24.654 17:26:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:24.654 17:26:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2105746 00:03:24.654 17:26:19 rpc -- common/autotest_common.sh@829 -- # '[' -z 2105746 ']' 00:03:24.654 17:26:19 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:24.654 17:26:19 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:24.654 17:26:19 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:24.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:24.654 17:26:19 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:24.654 17:26:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.654 [2024-07-15 17:26:19.749850] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:03:24.654 [2024-07-15 17:26:19.749964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105746 ] 00:03:24.654 EAL: No free 2048 kB hugepages reported on node 1 00:03:24.913 [2024-07-15 17:26:19.807395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:24.913 [2024-07-15 17:26:19.912378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:24.913 [2024-07-15 17:26:19.912449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2105746' to capture a snapshot of events at runtime. 00:03:24.913 [2024-07-15 17:26:19.912471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:24.913 [2024-07-15 17:26:19.912482] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:24.913 [2024-07-15 17:26:19.912492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2105746 for offline analysis/debug. 00:03:24.913 [2024-07-15 17:26:19.912524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:25.171 17:26:20 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:25.171 17:26:20 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:25.171 17:26:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:25.171 17:26:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:25.171 17:26:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:25.171 17:26:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:25.171 17:26:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.171 17:26:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.171 17:26:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.171 ************************************ 00:03:25.171 START TEST rpc_integrity 00:03:25.171 ************************************ 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.171 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:25.171 { 00:03:25.171 "name": "Malloc0", 00:03:25.171 "aliases": [ 00:03:25.171 "0cbc73a7-011b-4ad0-97d8-58fd3015abf6" 00:03:25.171 ], 00:03:25.171 "product_name": "Malloc disk", 00:03:25.171 "block_size": 512, 00:03:25.171 "num_blocks": 16384, 00:03:25.171 "uuid": "0cbc73a7-011b-4ad0-97d8-58fd3015abf6", 00:03:25.171 "assigned_rate_limits": { 00:03:25.171 "rw_ios_per_sec": 0, 00:03:25.171 "rw_mbytes_per_sec": 0, 00:03:25.171 "r_mbytes_per_sec": 0, 00:03:25.171 "w_mbytes_per_sec": 0 00:03:25.171 }, 00:03:25.171 "claimed": false, 00:03:25.171 "zoned": false, 00:03:25.171 "supported_io_types": { 00:03:25.171 "read": true, 00:03:25.171 "write": true, 00:03:25.171 "unmap": true, 00:03:25.171 "flush": true, 00:03:25.171 "reset": true, 00:03:25.171 "nvme_admin": false, 00:03:25.171 "nvme_io": false, 00:03:25.171 "nvme_io_md": false, 00:03:25.171 "write_zeroes": true, 00:03:25.171 "zcopy": true, 00:03:25.171 "get_zone_info": false, 00:03:25.171 "zone_management": false, 00:03:25.171 "zone_append": false, 00:03:25.171 "compare": false, 00:03:25.171 "compare_and_write": false, 00:03:25.171 "abort": true, 00:03:25.171 "seek_hole": false, 00:03:25.171 "seek_data": false, 00:03:25.171 "copy": true, 00:03:25.171 "nvme_iov_md": false 00:03:25.171 }, 00:03:25.171 "memory_domains": [ 00:03:25.171 { 00:03:25.171 "dma_device_id": "system", 00:03:25.171 "dma_device_type": 1 00:03:25.171 }, 00:03:25.171 { 00:03:25.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.171 "dma_device_type": 2 00:03:25.171 } 00:03:25.171 ], 00:03:25.171 "driver_specific": {} 00:03:25.171 } 00:03:25.171 ]' 00:03:25.171 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.430 [2024-07-15 17:26:20.321376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:25.430 [2024-07-15 17:26:20.321421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:25.430 [2024-07-15 17:26:20.321445] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaead50 00:03:25.430 [2024-07-15 17:26:20.321461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:25.430 
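The rpc_integrity suite above is driving the target purely through RPC: create a malloc bdev, claim it with a passthru bdev, and check the reported bdev count with jq at each step. Outside the harness the same sequence can be issued with the repo's RPC client; a minimal sketch, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock:

    # create an 8 MiB malloc bdev with 512-byte blocks; prints its name (Malloc0 here)
    ./scripts/rpc.py bdev_malloc_create 8 512
    # stack a passthru bdev on top of the malloc bdev, which claims it
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    # list bdevs and count them, as the test does with 'jq length'
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0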
[2024-07-15 17:26:20.322991] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:25.430 [2024-07-15 17:26:20.323016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:25.430 Passthru0 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:25.430 { 00:03:25.430 "name": "Malloc0", 00:03:25.430 "aliases": [ 00:03:25.430 "0cbc73a7-011b-4ad0-97d8-58fd3015abf6" 00:03:25.430 ], 00:03:25.430 "product_name": "Malloc disk", 00:03:25.430 "block_size": 512, 00:03:25.430 "num_blocks": 16384, 00:03:25.430 "uuid": "0cbc73a7-011b-4ad0-97d8-58fd3015abf6", 00:03:25.430 "assigned_rate_limits": { 00:03:25.430 "rw_ios_per_sec": 0, 00:03:25.430 "rw_mbytes_per_sec": 0, 00:03:25.430 "r_mbytes_per_sec": 0, 00:03:25.430 "w_mbytes_per_sec": 0 00:03:25.430 }, 00:03:25.430 "claimed": true, 00:03:25.430 "claim_type": "exclusive_write", 00:03:25.430 "zoned": false, 00:03:25.430 "supported_io_types": { 00:03:25.430 "read": true, 00:03:25.430 "write": true, 00:03:25.430 "unmap": true, 00:03:25.430 "flush": true, 00:03:25.430 "reset": true, 00:03:25.430 "nvme_admin": false, 00:03:25.430 "nvme_io": false, 00:03:25.430 "nvme_io_md": false, 00:03:25.430 "write_zeroes": true, 00:03:25.430 "zcopy": true, 00:03:25.430 "get_zone_info": false, 00:03:25.430 "zone_management": false, 00:03:25.430 "zone_append": false, 00:03:25.430 "compare": false, 00:03:25.430 "compare_and_write": false, 00:03:25.430 "abort": true, 00:03:25.430 "seek_hole": false, 00:03:25.430 "seek_data": false, 00:03:25.430 "copy": true, 00:03:25.430 "nvme_iov_md": false 00:03:25.430 }, 00:03:25.430 "memory_domains": [ 00:03:25.430 { 00:03:25.430 "dma_device_id": "system", 00:03:25.430 "dma_device_type": 1 00:03:25.430 }, 00:03:25.430 { 00:03:25.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.430 "dma_device_type": 2 00:03:25.430 } 00:03:25.430 ], 00:03:25.430 "driver_specific": {} 00:03:25.430 }, 00:03:25.430 { 00:03:25.430 "name": "Passthru0", 00:03:25.430 "aliases": [ 00:03:25.430 "63560311-0841-53ef-92b7-6f589762bccf" 00:03:25.430 ], 00:03:25.430 "product_name": "passthru", 00:03:25.430 "block_size": 512, 00:03:25.430 "num_blocks": 16384, 00:03:25.430 "uuid": "63560311-0841-53ef-92b7-6f589762bccf", 00:03:25.430 "assigned_rate_limits": { 00:03:25.430 "rw_ios_per_sec": 0, 00:03:25.430 "rw_mbytes_per_sec": 0, 00:03:25.430 "r_mbytes_per_sec": 0, 00:03:25.430 "w_mbytes_per_sec": 0 00:03:25.430 }, 00:03:25.430 "claimed": false, 00:03:25.430 "zoned": false, 00:03:25.430 "supported_io_types": { 00:03:25.430 "read": true, 00:03:25.430 "write": true, 00:03:25.430 "unmap": true, 00:03:25.430 "flush": true, 00:03:25.430 "reset": true, 00:03:25.430 "nvme_admin": false, 00:03:25.430 "nvme_io": false, 00:03:25.430 "nvme_io_md": false, 00:03:25.430 "write_zeroes": true, 00:03:25.430 "zcopy": true, 00:03:25.430 "get_zone_info": false, 00:03:25.430 "zone_management": false, 00:03:25.430 "zone_append": false, 00:03:25.430 "compare": false, 00:03:25.430 "compare_and_write": false, 00:03:25.430 "abort": true, 00:03:25.430 "seek_hole": false, 
00:03:25.430 "seek_data": false, 00:03:25.430 "copy": true, 00:03:25.430 "nvme_iov_md": false 00:03:25.430 }, 00:03:25.430 "memory_domains": [ 00:03:25.430 { 00:03:25.430 "dma_device_id": "system", 00:03:25.430 "dma_device_type": 1 00:03:25.430 }, 00:03:25.430 { 00:03:25.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.430 "dma_device_type": 2 00:03:25.430 } 00:03:25.430 ], 00:03:25.430 "driver_specific": { 00:03:25.430 "passthru": { 00:03:25.430 "name": "Passthru0", 00:03:25.430 "base_bdev_name": "Malloc0" 00:03:25.430 } 00:03:25.430 } 00:03:25.430 } 00:03:25.430 ]' 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:25.430 17:26:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:25.430 00:03:25.430 real 0m0.235s 00:03:25.430 user 0m0.155s 00:03:25.430 sys 0m0.022s 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.430 17:26:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.430 ************************************ 00:03:25.430 END TEST rpc_integrity 00:03:25.430 ************************************ 00:03:25.430 17:26:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:25.430 17:26:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:25.430 17:26:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.431 17:26:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.431 17:26:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.431 ************************************ 00:03:25.431 START TEST rpc_plugins 00:03:25.431 ************************************ 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:25.431 { 00:03:25.431 "name": "Malloc1", 00:03:25.431 "aliases": [ 00:03:25.431 "58f9230b-da3f-42bb-a8af-28f21b8db232" 00:03:25.431 ], 00:03:25.431 "product_name": "Malloc disk", 00:03:25.431 "block_size": 4096, 00:03:25.431 "num_blocks": 256, 00:03:25.431 "uuid": "58f9230b-da3f-42bb-a8af-28f21b8db232", 00:03:25.431 "assigned_rate_limits": { 00:03:25.431 "rw_ios_per_sec": 0, 00:03:25.431 "rw_mbytes_per_sec": 0, 00:03:25.431 "r_mbytes_per_sec": 0, 00:03:25.431 "w_mbytes_per_sec": 0 00:03:25.431 }, 00:03:25.431 "claimed": false, 00:03:25.431 "zoned": false, 00:03:25.431 "supported_io_types": { 00:03:25.431 "read": true, 00:03:25.431 "write": true, 00:03:25.431 "unmap": true, 00:03:25.431 "flush": true, 00:03:25.431 "reset": true, 00:03:25.431 "nvme_admin": false, 00:03:25.431 "nvme_io": false, 00:03:25.431 "nvme_io_md": false, 00:03:25.431 "write_zeroes": true, 00:03:25.431 "zcopy": true, 00:03:25.431 "get_zone_info": false, 00:03:25.431 "zone_management": false, 00:03:25.431 "zone_append": false, 00:03:25.431 "compare": false, 00:03:25.431 "compare_and_write": false, 00:03:25.431 "abort": true, 00:03:25.431 "seek_hole": false, 00:03:25.431 "seek_data": false, 00:03:25.431 "copy": true, 00:03:25.431 "nvme_iov_md": false 00:03:25.431 }, 00:03:25.431 "memory_domains": [ 00:03:25.431 { 00:03:25.431 "dma_device_id": "system", 00:03:25.431 "dma_device_type": 1 00:03:25.431 }, 00:03:25.431 { 00:03:25.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.431 "dma_device_type": 2 00:03:25.431 } 00:03:25.431 ], 00:03:25.431 "driver_specific": {} 00:03:25.431 } 00:03:25.431 ]' 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.431 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.431 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.689 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.689 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:25.689 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:25.689 17:26:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:25.689 00:03:25.689 real 0m0.116s 00:03:25.689 user 0m0.076s 00:03:25.689 sys 0m0.010s 00:03:25.689 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.689 17:26:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:25.689 ************************************ 00:03:25.689 END TEST rpc_plugins 00:03:25.689 ************************************ 00:03:25.689 17:26:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:25.689 17:26:20 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:25.689 17:26:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.689 17:26:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.689 17:26:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.689 ************************************ 00:03:25.689 START TEST rpc_trace_cmd_test 00:03:25.689 ************************************ 00:03:25.689 17:26:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:25.690 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2105746", 00:03:25.690 "tpoint_group_mask": "0x8", 00:03:25.690 "iscsi_conn": { 00:03:25.690 "mask": "0x2", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "scsi": { 00:03:25.690 "mask": "0x4", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "bdev": { 00:03:25.690 "mask": "0x8", 00:03:25.690 "tpoint_mask": "0xffffffffffffffff" 00:03:25.690 }, 00:03:25.690 "nvmf_rdma": { 00:03:25.690 "mask": "0x10", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "nvmf_tcp": { 00:03:25.690 "mask": "0x20", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "ftl": { 00:03:25.690 "mask": "0x40", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "blobfs": { 00:03:25.690 "mask": "0x80", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "dsa": { 00:03:25.690 "mask": "0x200", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "thread": { 00:03:25.690 "mask": "0x400", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "nvme_pcie": { 00:03:25.690 "mask": "0x800", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "iaa": { 00:03:25.690 "mask": "0x1000", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "nvme_tcp": { 00:03:25.690 "mask": "0x2000", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "bdev_nvme": { 00:03:25.690 "mask": "0x4000", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 }, 00:03:25.690 "sock": { 00:03:25.690 "mask": "0x8000", 00:03:25.690 "tpoint_mask": "0x0" 00:03:25.690 } 00:03:25.690 }' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:25.690 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:25.948 17:26:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
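rpc_trace_cmd_test depends on the target having been launched with '-e bdev': trace_get_info must then report a shared-memory trace file named after the target pid and a fully set bdev tpoint mask. A short sketch of the same check, assuming the target from this run is still listening:

    info=$(./scripts/rpc.py trace_get_info)
    echo "$info" | jq -r .tpoint_shm_path    # expect /dev/shm/spdk_tgt_trace.pid<pid>
    echo "$info" | jq -r .bdev.tpoint_mask   # expect 0xffffffffffffffff for '-e bdev'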
00:03:25.948 00:03:25.948 real 0m0.197s 00:03:25.948 user 0m0.178s 00:03:25.948 sys 0m0.013s 00:03:25.948 17:26:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.948 17:26:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:25.948 ************************************ 00:03:25.948 END TEST rpc_trace_cmd_test 00:03:25.948 ************************************ 00:03:25.948 17:26:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:25.948 17:26:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:25.948 17:26:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:25.948 17:26:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:25.948 17:26:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.948 17:26:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.948 17:26:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.948 ************************************ 00:03:25.948 START TEST rpc_daemon_integrity 00:03:25.948 ************************************ 00:03:25.948 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:25.948 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:25.948 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.948 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.948 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:25.949 { 00:03:25.949 "name": "Malloc2", 00:03:25.949 "aliases": [ 00:03:25.949 "085fb0df-29b6-43ba-954d-23a643d66e3b" 00:03:25.949 ], 00:03:25.949 "product_name": "Malloc disk", 00:03:25.949 "block_size": 512, 00:03:25.949 "num_blocks": 16384, 00:03:25.949 "uuid": "085fb0df-29b6-43ba-954d-23a643d66e3b", 00:03:25.949 "assigned_rate_limits": { 00:03:25.949 "rw_ios_per_sec": 0, 00:03:25.949 "rw_mbytes_per_sec": 0, 00:03:25.949 "r_mbytes_per_sec": 0, 00:03:25.949 "w_mbytes_per_sec": 0 00:03:25.949 }, 00:03:25.949 "claimed": false, 00:03:25.949 "zoned": false, 00:03:25.949 "supported_io_types": { 00:03:25.949 "read": true, 00:03:25.949 "write": true, 00:03:25.949 "unmap": true, 00:03:25.949 "flush": true, 00:03:25.949 "reset": true, 00:03:25.949 "nvme_admin": false, 00:03:25.949 "nvme_io": false, 
00:03:25.949 "nvme_io_md": false, 00:03:25.949 "write_zeroes": true, 00:03:25.949 "zcopy": true, 00:03:25.949 "get_zone_info": false, 00:03:25.949 "zone_management": false, 00:03:25.949 "zone_append": false, 00:03:25.949 "compare": false, 00:03:25.949 "compare_and_write": false, 00:03:25.949 "abort": true, 00:03:25.949 "seek_hole": false, 00:03:25.949 "seek_data": false, 00:03:25.949 "copy": true, 00:03:25.949 "nvme_iov_md": false 00:03:25.949 }, 00:03:25.949 "memory_domains": [ 00:03:25.949 { 00:03:25.949 "dma_device_id": "system", 00:03:25.949 "dma_device_type": 1 00:03:25.949 }, 00:03:25.949 { 00:03:25.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.949 "dma_device_type": 2 00:03:25.949 } 00:03:25.949 ], 00:03:25.949 "driver_specific": {} 00:03:25.949 } 00:03:25.949 ]' 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.949 17:26:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.949 [2024-07-15 17:26:21.003405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:25.949 [2024-07-15 17:26:21.003449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:25.949 [2024-07-15 17:26:21.003477] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaea980 00:03:25.949 [2024-07-15 17:26:21.003493] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:25.949 [2024-07-15 17:26:21.004825] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:25.949 [2024-07-15 17:26:21.004855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:25.949 Passthru0 00:03:25.949 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.949 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:25.949 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.949 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.949 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.949 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:25.949 { 00:03:25.949 "name": "Malloc2", 00:03:25.949 "aliases": [ 00:03:25.949 "085fb0df-29b6-43ba-954d-23a643d66e3b" 00:03:25.949 ], 00:03:25.949 "product_name": "Malloc disk", 00:03:25.949 "block_size": 512, 00:03:25.949 "num_blocks": 16384, 00:03:25.949 "uuid": "085fb0df-29b6-43ba-954d-23a643d66e3b", 00:03:25.949 "assigned_rate_limits": { 00:03:25.949 "rw_ios_per_sec": 0, 00:03:25.949 "rw_mbytes_per_sec": 0, 00:03:25.949 "r_mbytes_per_sec": 0, 00:03:25.949 "w_mbytes_per_sec": 0 00:03:25.949 }, 00:03:25.949 "claimed": true, 00:03:25.949 "claim_type": "exclusive_write", 00:03:25.949 "zoned": false, 00:03:25.949 "supported_io_types": { 00:03:25.949 "read": true, 00:03:25.949 "write": true, 00:03:25.949 "unmap": true, 00:03:25.949 "flush": true, 00:03:25.949 "reset": true, 00:03:25.949 "nvme_admin": false, 00:03:25.949 "nvme_io": false, 00:03:25.949 "nvme_io_md": false, 00:03:25.949 "write_zeroes": true, 00:03:25.949 "zcopy": true, 00:03:25.949 "get_zone_info": 
false, 00:03:25.949 "zone_management": false, 00:03:25.949 "zone_append": false, 00:03:25.949 "compare": false, 00:03:25.949 "compare_and_write": false, 00:03:25.949 "abort": true, 00:03:25.949 "seek_hole": false, 00:03:25.949 "seek_data": false, 00:03:25.949 "copy": true, 00:03:25.949 "nvme_iov_md": false 00:03:25.949 }, 00:03:25.949 "memory_domains": [ 00:03:25.949 { 00:03:25.949 "dma_device_id": "system", 00:03:25.949 "dma_device_type": 1 00:03:25.949 }, 00:03:25.949 { 00:03:25.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.949 "dma_device_type": 2 00:03:25.949 } 00:03:25.949 ], 00:03:25.949 "driver_specific": {} 00:03:25.949 }, 00:03:25.949 { 00:03:25.949 "name": "Passthru0", 00:03:25.949 "aliases": [ 00:03:25.949 "16b3fdb2-fafd-5117-ad51-fdc0108cffba" 00:03:25.949 ], 00:03:25.949 "product_name": "passthru", 00:03:25.949 "block_size": 512, 00:03:25.949 "num_blocks": 16384, 00:03:25.949 "uuid": "16b3fdb2-fafd-5117-ad51-fdc0108cffba", 00:03:25.949 "assigned_rate_limits": { 00:03:25.949 "rw_ios_per_sec": 0, 00:03:25.949 "rw_mbytes_per_sec": 0, 00:03:25.949 "r_mbytes_per_sec": 0, 00:03:25.949 "w_mbytes_per_sec": 0 00:03:25.949 }, 00:03:25.949 "claimed": false, 00:03:25.949 "zoned": false, 00:03:25.949 "supported_io_types": { 00:03:25.949 "read": true, 00:03:25.949 "write": true, 00:03:25.949 "unmap": true, 00:03:25.949 "flush": true, 00:03:25.949 "reset": true, 00:03:25.949 "nvme_admin": false, 00:03:25.949 "nvme_io": false, 00:03:25.949 "nvme_io_md": false, 00:03:25.949 "write_zeroes": true, 00:03:25.949 "zcopy": true, 00:03:25.949 "get_zone_info": false, 00:03:25.949 "zone_management": false, 00:03:25.949 "zone_append": false, 00:03:25.949 "compare": false, 00:03:25.950 "compare_and_write": false, 00:03:25.950 "abort": true, 00:03:25.950 "seek_hole": false, 00:03:25.950 "seek_data": false, 00:03:25.950 "copy": true, 00:03:25.950 "nvme_iov_md": false 00:03:25.950 }, 00:03:25.950 "memory_domains": [ 00:03:25.950 { 00:03:25.950 "dma_device_id": "system", 00:03:25.950 "dma_device_type": 1 00:03:25.950 }, 00:03:25.950 { 00:03:25.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:25.950 "dma_device_type": 2 00:03:25.950 } 00:03:25.950 ], 00:03:25.950 "driver_specific": { 00:03:25.950 "passthru": { 00:03:25.950 "name": "Passthru0", 00:03:25.950 "base_bdev_name": "Malloc2" 00:03:25.950 } 00:03:25.950 } 00:03:25.950 } 00:03:25.950 ]' 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:25.950 17:26:21 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:25.950 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:26.208 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:26.208 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:26.208 17:26:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:26.208 00:03:26.208 real 0m0.226s 00:03:26.208 user 0m0.146s 00:03:26.208 sys 0m0.023s 00:03:26.208 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.208 17:26:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:26.208 ************************************ 00:03:26.208 END TEST rpc_daemon_integrity 00:03:26.208 ************************************ 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:26.208 17:26:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:26.208 17:26:21 rpc -- rpc/rpc.sh@84 -- # killprocess 2105746 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@948 -- # '[' -z 2105746 ']' 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@952 -- # kill -0 2105746 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@953 -- # uname 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2105746 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2105746' 00:03:26.208 killing process with pid 2105746 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@967 -- # kill 2105746 00:03:26.208 17:26:21 rpc -- common/autotest_common.sh@972 -- # wait 2105746 00:03:26.775 00:03:26.775 real 0m1.977s 00:03:26.775 user 0m2.469s 00:03:26.775 sys 0m0.575s 00:03:26.775 17:26:21 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.775 17:26:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.775 ************************************ 00:03:26.775 END TEST rpc 00:03:26.775 ************************************ 00:03:26.775 17:26:21 -- common/autotest_common.sh@1142 -- # return 0 00:03:26.775 17:26:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:26.775 17:26:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.775 17:26:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.775 17:26:21 -- common/autotest_common.sh@10 -- # set +x 00:03:26.775 ************************************ 00:03:26.775 START TEST skip_rpc 00:03:26.775 ************************************ 00:03:26.775 17:26:21 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:26.775 * Looking for test storage... 
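The rpc suite finishes by tearing the target down with killprocess before skip_rpc begins. In essence that helper signals the recorded pid and waits for the reactor to exit; an illustrative sketch, not the helper's exact body:

    spdk_pid=2105746                      # pid captured when spdk_tgt was launched
    kill "$spdk_pid"                      # request shutdown
    while kill -0 "$spdk_pid" 2>/dev/null; do
        sleep 0.5                         # poll until the process is gone
    done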
00:03:26.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:26.775 17:26:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:26.775 17:26:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:26.775 17:26:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:26.775 17:26:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.775 17:26:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.775 17:26:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.775 ************************************ 00:03:26.775 START TEST skip_rpc 00:03:26.775 ************************************ 00:03:26.775 17:26:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:26.775 17:26:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2106178 00:03:26.775 17:26:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:26.775 17:26:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:26.775 17:26:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:26.775 [2024-07-15 17:26:21.810253] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:03:26.775 [2024-07-15 17:26:21.810331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106178 ] 00:03:26.775 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.775 [2024-07-15 17:26:21.867288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.033 [2024-07-15 17:26:21.978244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2106178 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2106178 ']' 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2106178 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2106178 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2106178' 00:03:32.325 killing process with pid 2106178 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2106178 00:03:32.325 17:26:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2106178 00:03:32.325 00:03:32.325 real 0m5.504s 00:03:32.325 user 0m5.185s 00:03:32.325 sys 0m0.326s 00:03:32.325 17:26:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.325 17:26:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.325 ************************************ 00:03:32.325 END TEST skip_rpc 00:03:32.325 ************************************ 00:03:32.325 17:26:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:32.325 17:26:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:32.325 17:26:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.325 17:26:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.325 17:26:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.325 ************************************ 00:03:32.325 START TEST skip_rpc_with_json 00:03:32.325 ************************************ 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2106871 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2106871 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2106871 ']' 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
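waitforlisten blocks here until the freshly started target answers on /var/tmp/spdk.sock. One way to approximate it by hand, as a sketch assuming scripts/rpc.py and the default socket, is to poll a cheap RPC until it succeeds:

    for i in $(seq 1 100); do
        ./scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break   # target is serving RPCs
        sleep 0.1
    done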
00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:32.325 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.325 [2024-07-15 17:26:27.354097] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:03:32.325 [2024-07-15 17:26:27.354181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106871 ] 00:03:32.325 EAL: No free 2048 kB hugepages reported on node 1 00:03:32.325 [2024-07-15 17:26:27.415767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.584 [2024-07-15 17:26:27.536412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.842 [2024-07-15 17:26:27.806141] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:32.842 request: 00:03:32.842 { 00:03:32.842 "trtype": "tcp", 00:03:32.842 "method": "nvmf_get_transports", 00:03:32.842 "req_id": 1 00:03:32.842 } 00:03:32.842 Got JSON-RPC error response 00:03:32.842 response: 00:03:32.842 { 00:03:32.842 "code": -19, 00:03:32.842 "message": "No such device" 00:03:32.842 } 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.842 [2024-07-15 17:26:27.814266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.842 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:32.843 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.843 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.843 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.843 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:32.843 { 00:03:32.843 "subsystems": [ 00:03:32.843 { 00:03:32.843 "subsystem": "vfio_user_target", 00:03:32.843 "config": null 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "keyring", 00:03:32.843 "config": [] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "iobuf", 00:03:32.843 "config": [ 00:03:32.843 { 00:03:32.843 "method": "iobuf_set_options", 00:03:32.843 "params": { 00:03:32.843 "small_pool_count": 8192, 00:03:32.843 "large_pool_count": 1024, 00:03:32.843 "small_bufsize": 8192, 00:03:32.843 "large_bufsize": 
135168 00:03:32.843 } 00:03:32.843 } 00:03:32.843 ] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "sock", 00:03:32.843 "config": [ 00:03:32.843 { 00:03:32.843 "method": "sock_set_default_impl", 00:03:32.843 "params": { 00:03:32.843 "impl_name": "posix" 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "sock_impl_set_options", 00:03:32.843 "params": { 00:03:32.843 "impl_name": "ssl", 00:03:32.843 "recv_buf_size": 4096, 00:03:32.843 "send_buf_size": 4096, 00:03:32.843 "enable_recv_pipe": true, 00:03:32.843 "enable_quickack": false, 00:03:32.843 "enable_placement_id": 0, 00:03:32.843 "enable_zerocopy_send_server": true, 00:03:32.843 "enable_zerocopy_send_client": false, 00:03:32.843 "zerocopy_threshold": 0, 00:03:32.843 "tls_version": 0, 00:03:32.843 "enable_ktls": false 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "sock_impl_set_options", 00:03:32.843 "params": { 00:03:32.843 "impl_name": "posix", 00:03:32.843 "recv_buf_size": 2097152, 00:03:32.843 "send_buf_size": 2097152, 00:03:32.843 "enable_recv_pipe": true, 00:03:32.843 "enable_quickack": false, 00:03:32.843 "enable_placement_id": 0, 00:03:32.843 "enable_zerocopy_send_server": true, 00:03:32.843 "enable_zerocopy_send_client": false, 00:03:32.843 "zerocopy_threshold": 0, 00:03:32.843 "tls_version": 0, 00:03:32.843 "enable_ktls": false 00:03:32.843 } 00:03:32.843 } 00:03:32.843 ] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "vmd", 00:03:32.843 "config": [] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "accel", 00:03:32.843 "config": [ 00:03:32.843 { 00:03:32.843 "method": "accel_set_options", 00:03:32.843 "params": { 00:03:32.843 "small_cache_size": 128, 00:03:32.843 "large_cache_size": 16, 00:03:32.843 "task_count": 2048, 00:03:32.843 "sequence_count": 2048, 00:03:32.843 "buf_count": 2048 00:03:32.843 } 00:03:32.843 } 00:03:32.843 ] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "bdev", 00:03:32.843 "config": [ 00:03:32.843 { 00:03:32.843 "method": "bdev_set_options", 00:03:32.843 "params": { 00:03:32.843 "bdev_io_pool_size": 65535, 00:03:32.843 "bdev_io_cache_size": 256, 00:03:32.843 "bdev_auto_examine": true, 00:03:32.843 "iobuf_small_cache_size": 128, 00:03:32.843 "iobuf_large_cache_size": 16 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "bdev_raid_set_options", 00:03:32.843 "params": { 00:03:32.843 "process_window_size_kb": 1024 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "bdev_iscsi_set_options", 00:03:32.843 "params": { 00:03:32.843 "timeout_sec": 30 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "bdev_nvme_set_options", 00:03:32.843 "params": { 00:03:32.843 "action_on_timeout": "none", 00:03:32.843 "timeout_us": 0, 00:03:32.843 "timeout_admin_us": 0, 00:03:32.843 "keep_alive_timeout_ms": 10000, 00:03:32.843 "arbitration_burst": 0, 00:03:32.843 "low_priority_weight": 0, 00:03:32.843 "medium_priority_weight": 0, 00:03:32.843 "high_priority_weight": 0, 00:03:32.843 "nvme_adminq_poll_period_us": 10000, 00:03:32.843 "nvme_ioq_poll_period_us": 0, 00:03:32.843 "io_queue_requests": 0, 00:03:32.843 "delay_cmd_submit": true, 00:03:32.843 "transport_retry_count": 4, 00:03:32.843 "bdev_retry_count": 3, 00:03:32.843 "transport_ack_timeout": 0, 00:03:32.843 "ctrlr_loss_timeout_sec": 0, 00:03:32.843 "reconnect_delay_sec": 0, 00:03:32.843 "fast_io_fail_timeout_sec": 0, 00:03:32.843 "disable_auto_failback": false, 00:03:32.843 "generate_uuids": false, 00:03:32.843 "transport_tos": 0, 
00:03:32.843 "nvme_error_stat": false, 00:03:32.843 "rdma_srq_size": 0, 00:03:32.843 "io_path_stat": false, 00:03:32.843 "allow_accel_sequence": false, 00:03:32.843 "rdma_max_cq_size": 0, 00:03:32.843 "rdma_cm_event_timeout_ms": 0, 00:03:32.843 "dhchap_digests": [ 00:03:32.843 "sha256", 00:03:32.843 "sha384", 00:03:32.843 "sha512" 00:03:32.843 ], 00:03:32.843 "dhchap_dhgroups": [ 00:03:32.843 "null", 00:03:32.843 "ffdhe2048", 00:03:32.843 "ffdhe3072", 00:03:32.843 "ffdhe4096", 00:03:32.843 "ffdhe6144", 00:03:32.843 "ffdhe8192" 00:03:32.843 ] 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "bdev_nvme_set_hotplug", 00:03:32.843 "params": { 00:03:32.843 "period_us": 100000, 00:03:32.843 "enable": false 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "bdev_wait_for_examine" 00:03:32.843 } 00:03:32.843 ] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "scsi", 00:03:32.843 "config": null 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "scheduler", 00:03:32.843 "config": [ 00:03:32.843 { 00:03:32.843 "method": "framework_set_scheduler", 00:03:32.843 "params": { 00:03:32.843 "name": "static" 00:03:32.843 } 00:03:32.843 } 00:03:32.843 ] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "vhost_scsi", 00:03:32.843 "config": [] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "vhost_blk", 00:03:32.843 "config": [] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "ublk", 00:03:32.843 "config": [] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "nbd", 00:03:32.843 "config": [] 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "subsystem": "nvmf", 00:03:32.843 "config": [ 00:03:32.843 { 00:03:32.843 "method": "nvmf_set_config", 00:03:32.843 "params": { 00:03:32.843 "discovery_filter": "match_any", 00:03:32.843 "admin_cmd_passthru": { 00:03:32.843 "identify_ctrlr": false 00:03:32.843 } 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "nvmf_set_max_subsystems", 00:03:32.843 "params": { 00:03:32.843 "max_subsystems": 1024 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "nvmf_set_crdt", 00:03:32.843 "params": { 00:03:32.843 "crdt1": 0, 00:03:32.843 "crdt2": 0, 00:03:32.843 "crdt3": 0 00:03:32.843 } 00:03:32.843 }, 00:03:32.843 { 00:03:32.843 "method": "nvmf_create_transport", 00:03:32.843 "params": { 00:03:32.843 "trtype": "TCP", 00:03:32.843 "max_queue_depth": 128, 00:03:32.843 "max_io_qpairs_per_ctrlr": 127, 00:03:32.843 "in_capsule_data_size": 4096, 00:03:32.843 "max_io_size": 131072, 00:03:32.843 "io_unit_size": 131072, 00:03:32.843 "max_aq_depth": 128, 00:03:32.844 "num_shared_buffers": 511, 00:03:32.844 "buf_cache_size": 4294967295, 00:03:32.844 "dif_insert_or_strip": false, 00:03:32.844 "zcopy": false, 00:03:32.844 "c2h_success": true, 00:03:32.844 "sock_priority": 0, 00:03:32.844 "abort_timeout_sec": 1, 00:03:32.844 "ack_timeout": 0, 00:03:32.844 "data_wr_pool_size": 0 00:03:32.844 } 00:03:32.844 } 00:03:32.844 ] 00:03:32.844 }, 00:03:32.844 { 00:03:32.844 "subsystem": "iscsi", 00:03:32.844 "config": [ 00:03:32.844 { 00:03:32.844 "method": "iscsi_set_options", 00:03:32.844 "params": { 00:03:32.844 "node_base": "iqn.2016-06.io.spdk", 00:03:32.844 "max_sessions": 128, 00:03:32.844 "max_connections_per_session": 2, 00:03:32.844 "max_queue_depth": 64, 00:03:32.844 "default_time2wait": 2, 00:03:32.844 "default_time2retain": 20, 00:03:32.844 "first_burst_length": 8192, 00:03:32.844 "immediate_data": true, 00:03:32.844 "allow_duplicated_isid": false, 00:03:32.844 
"error_recovery_level": 0, 00:03:32.844 "nop_timeout": 60, 00:03:32.844 "nop_in_interval": 30, 00:03:32.844 "disable_chap": false, 00:03:32.844 "require_chap": false, 00:03:32.844 "mutual_chap": false, 00:03:32.844 "chap_group": 0, 00:03:32.844 "max_large_datain_per_connection": 64, 00:03:32.844 "max_r2t_per_connection": 4, 00:03:32.844 "pdu_pool_size": 36864, 00:03:32.844 "immediate_data_pool_size": 16384, 00:03:32.844 "data_out_pool_size": 2048 00:03:32.844 } 00:03:32.844 } 00:03:32.844 ] 00:03:32.844 } 00:03:32.844 ] 00:03:32.844 } 00:03:32.844 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:32.844 17:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2106871 00:03:32.844 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2106871 ']' 00:03:32.844 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2106871 00:03:32.844 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:32.844 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:32.844 17:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2106871 00:03:33.102 17:26:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:33.102 17:26:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:33.102 17:26:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2106871' 00:03:33.102 killing process with pid 2106871 00:03:33.102 17:26:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2106871 00:03:33.102 17:26:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2106871 00:03:33.360 17:26:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2107013 00:03:33.360 17:26:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:33.360 17:26:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2107013 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2107013 ']' 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2107013 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2107013 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2107013' 00:03:38.629 killing process with pid 2107013 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2107013 00:03:38.629 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2107013 
00:03:38.888 17:26:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:38.888 17:26:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:38.888 00:03:38.888 real 0m6.663s 00:03:38.888 user 0m6.269s 00:03:38.888 sys 0m0.715s 00:03:38.888 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.888 17:26:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:38.888 ************************************ 00:03:38.888 END TEST skip_rpc_with_json 00:03:38.888 ************************************ 00:03:38.888 17:26:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:38.888 17:26:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:38.888 17:26:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.888 17:26:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.888 17:26:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.888 ************************************ 00:03:38.888 START TEST skip_rpc_with_delay 00:03:38.888 ************************************ 00:03:38.888 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:38.888 17:26:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:38.888 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:38.888 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:38.888 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.888 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.888 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:39.146 [2024-07-15 17:26:34.074684] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
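skip_rpc_with_delay is a negative test: '--wait-for-rpc' defers subsystem initialization until an RPC arrives, which cannot work when '--no-rpc-server' is also given, so the app is expected to refuse to start. The assertion amounts to something like:

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt accepted an invalid flag combination" >&2
        exit 1                            # the test expects this launch to fail
    fi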
00:03:39.146 [2024-07-15 17:26:34.074804] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:39.146 00:03:39.146 real 0m0.068s 00:03:39.146 user 0m0.044s 00:03:39.146 sys 0m0.024s 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.146 17:26:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:39.146 ************************************ 00:03:39.146 END TEST skip_rpc_with_delay 00:03:39.146 ************************************ 00:03:39.146 17:26:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:39.146 17:26:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:39.146 17:26:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:39.146 17:26:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:39.146 17:26:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.146 17:26:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.146 17:26:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.146 ************************************ 00:03:39.146 START TEST exit_on_failed_rpc_init 00:03:39.146 ************************************ 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2107732 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2107732 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2107732 ']' 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:39.146 17:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.146 [2024-07-15 17:26:34.191227] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:03:39.146 [2024-07-15 17:26:34.191328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107732 ] 00:03:39.146 EAL: No free 2048 kB hugepages reported on node 1 00:03:39.146 [2024-07-15 17:26:34.253315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.405 [2024-07-15 17:26:34.368954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:40.341 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:40.341 [2024-07-15 17:26:35.188587] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
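At this point the first spdk_tgt instance (pid 2107732) is fully up and owns the default RPC socket /var/tmp/spdk.sock. The second instance launched next is expected to fail for exactly that reason, which is what exit_on_failed_rpc_init verifies. As a rough sketch (not the harness code, and with made-up socket names), two targets can only coexist when each is given its own RPC socket via -r:

    # Illustrative only: distinct -r sockets let two targets run side by side;
    # the failing second launch that follows deliberately omits this.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk_first.sock &
    "$SPDK_TGT" -m 0x2 -r /var/tmp/spdk_second.sock &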
00:03:40.341 [2024-07-15 17:26:35.188675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107867 ] 00:03:40.341 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.341 [2024-07-15 17:26:35.250860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.341 [2024-07-15 17:26:35.368588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:40.341 [2024-07-15 17:26:35.368724] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:40.341 [2024-07-15 17:26:35.368751] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:40.341 [2024-07-15 17:26:35.368766] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:40.601 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:40.601 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:40.601 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:40.601 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2107732 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2107732 ']' 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2107732 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2107732 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2107732' 00:03:40.602 killing process with pid 2107732 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2107732 00:03:40.602 17:26:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2107732 00:03:41.171 00:03:41.171 real 0m1.862s 00:03:41.171 user 0m2.242s 00:03:41.171 sys 0m0.478s 00:03:41.171 17:26:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.171 17:26:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.171 ************************************ 00:03:41.171 END TEST exit_on_failed_rpc_init 00:03:41.171 ************************************ 00:03:41.171 17:26:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:41.171 17:26:36 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.171 00:03:41.171 real 0m14.347s 00:03:41.171 user 0m13.832s 00:03:41.171 sys 0m1.718s 00:03:41.171 17:26:36 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.171 17:26:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.171 ************************************ 00:03:41.171 END TEST skip_rpc 00:03:41.171 ************************************ 00:03:41.171 17:26:36 -- common/autotest_common.sh@1142 -- # return 0 00:03:41.171 17:26:36 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:41.171 17:26:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.171 17:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.171 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:03:41.171 ************************************ 00:03:41.171 START TEST rpc_client 00:03:41.171 ************************************ 00:03:41.171 17:26:36 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:41.171 * Looking for test storage... 00:03:41.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:41.171 17:26:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:41.171 OK 00:03:41.171 17:26:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:41.171 00:03:41.171 real 0m0.061s 00:03:41.171 user 0m0.024s 00:03:41.171 sys 0m0.041s 00:03:41.171 17:26:36 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.171 17:26:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:41.171 ************************************ 00:03:41.171 END TEST rpc_client 00:03:41.171 ************************************ 00:03:41.171 17:26:36 -- common/autotest_common.sh@1142 -- # return 0 00:03:41.171 17:26:36 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:41.171 17:26:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.171 17:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.171 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:03:41.171 ************************************ 00:03:41.171 START TEST json_config 00:03:41.171 ************************************ 00:03:41.171 17:26:36 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.171 
17:26:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.171 17:26:36 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.171 17:26:36 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.171 17:26:36 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.171 17:26:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.171 17:26:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.171 17:26:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.171 17:26:36 json_config -- paths/export.sh@5 -- # export PATH 00:03:41.171 17:26:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@47 -- # : 0 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.171 17:26:36 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:41.171 17:26:36 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:41.171 17:26:36 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:41.172 17:26:36 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:41.172 INFO: JSON configuration test init 00:03:41.172 17:26:36 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:41.172 17:26:36 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.172 17:26:36 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.172 17:26:36 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:41.172 17:26:36 json_config -- json_config/common.sh@9 -- # local app=target 00:03:41.172 17:26:36 json_config -- json_config/common.sh@10 -- # shift 00:03:41.172 17:26:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:41.172 17:26:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:41.172 17:26:36 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:41.172 17:26:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:41.172 17:26:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:41.172 17:26:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2108109 00:03:41.172 17:26:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:41.172 17:26:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:41.172 Waiting for target to run... 00:03:41.172 17:26:36 json_config -- json_config/common.sh@25 -- # waitforlisten 2108109 /var/tmp/spdk_tgt.sock 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@829 -- # '[' -z 2108109 ']' 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:41.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:41.172 17:26:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.172 [2024-07-15 17:26:36.286076] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:03:41.172 [2024-07-15 17:26:36.286182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108109 ] 00:03:41.432 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.691 [2024-07-15 17:26:36.618871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.692 [2024-07-15 17:26:36.707587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.261 17:26:37 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:42.261 17:26:37 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:42.261 17:26:37 json_config -- json_config/common.sh@26 -- # echo '' 00:03:42.261 00:03:42.261 17:26:37 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:42.261 17:26:37 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:42.261 17:26:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.261 17:26:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.261 17:26:37 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:42.261 17:26:37 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:42.261 17:26:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:42.261 17:26:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.261 17:26:37 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:42.261 17:26:37 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:42.261 17:26:37 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:45.554 17:26:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.554 17:26:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:45.554 17:26:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:45.554 17:26:40 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:45.554 17:26:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:45.554 17:26:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:45.812 17:26:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.812 17:26:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:45.812 17:26:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:45.812 MallocForNvmf0 00:03:45.812 17:26:40 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:45.812 17:26:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:46.094 MallocForNvmf1 00:03:46.094 17:26:41 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:46.094 17:26:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:46.351 [2024-07-15 17:26:41.408123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:46.352 17:26:41 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:46.352 17:26:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:46.610 17:26:41 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:46.610 17:26:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:46.868 17:26:41 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:46.868 17:26:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:47.127 17:26:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:47.127 17:26:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:47.385 [2024-07-15 17:26:42.407341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:47.385 17:26:42 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:47.385 17:26:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.385 17:26:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.385 17:26:42 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:47.385 17:26:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.385 17:26:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.385 17:26:42 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:47.385 17:26:42 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:47.385 17:26:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:47.675 MallocBdevForConfigChangeCheck 00:03:47.675 17:26:42 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:47.675 17:26:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.675 17:26:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.675 17:26:42 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:47.675 17:26:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:48.249 17:26:43 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:48.249 INFO: shutting down applications... 00:03:48.249 17:26:43 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:48.249 17:26:43 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:48.249 17:26:43 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:48.249 17:26:43 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:49.628 Calling clear_iscsi_subsystem 00:03:49.628 Calling clear_nvmf_subsystem 00:03:49.628 Calling clear_nbd_subsystem 00:03:49.628 Calling clear_ublk_subsystem 00:03:49.628 Calling clear_vhost_blk_subsystem 00:03:49.628 Calling clear_vhost_scsi_subsystem 00:03:49.628 Calling clear_bdev_subsystem 00:03:49.628 17:26:44 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:49.628 17:26:44 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:49.628 17:26:44 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:49.628 17:26:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:49.628 17:26:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.628 17:26:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:50.196 17:26:45 json_config -- json_config/json_config.sh@345 -- # break 00:03:50.196 17:26:45 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:50.196 17:26:45 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:50.196 17:26:45 json_config -- json_config/common.sh@31 -- # local app=target 00:03:50.196 17:26:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:50.196 17:26:45 json_config -- json_config/common.sh@35 -- # [[ -n 2108109 ]] 00:03:50.196 17:26:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2108109 00:03:50.196 17:26:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:50.196 17:26:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:50.196 17:26:45 json_config -- json_config/common.sh@41 -- # kill -0 2108109 00:03:50.196 17:26:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:50.765 17:26:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:50.765 17:26:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:50.765 17:26:45 json_config -- json_config/common.sh@41 -- # kill -0 2108109 00:03:50.765 17:26:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:50.765 17:26:45 json_config -- json_config/common.sh@43 -- # break 00:03:50.765 17:26:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:50.765 17:26:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:03:50.765 SPDK target shutdown done 00:03:50.765 17:26:45 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:50.765 INFO: relaunching applications... 00:03:50.765 17:26:45 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.766 17:26:45 json_config -- json_config/common.sh@9 -- # local app=target 00:03:50.766 17:26:45 json_config -- json_config/common.sh@10 -- # shift 00:03:50.766 17:26:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:50.766 17:26:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:50.766 17:26:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:50.766 17:26:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.766 17:26:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.766 17:26:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2109303 00:03:50.766 17:26:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.766 17:26:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:50.766 Waiting for target to run... 00:03:50.766 17:26:45 json_config -- json_config/common.sh@25 -- # waitforlisten 2109303 /var/tmp/spdk_tgt.sock 00:03:50.766 17:26:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 2109303 ']' 00:03:50.766 17:26:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:50.766 17:26:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:50.766 17:26:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:50.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:50.766 17:26:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:50.766 17:26:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.766 [2024-07-15 17:26:45.698298] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:03:50.766 [2024-07-15 17:26:45.698403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109303 ] 00:03:50.766 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.331 [2024-07-15 17:26:46.223712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.331 [2024-07-15 17:26:46.331157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.614 [2024-07-15 17:26:49.376670] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.614 [2024-07-15 17:26:49.409121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.180 17:26:50 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:55.180 17:26:50 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:55.180 17:26:50 json_config -- json_config/common.sh@26 -- # echo '' 00:03:55.180 00:03:55.180 17:26:50 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:55.180 17:26:50 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.180 INFO: Checking if target configuration is the same... 00:03:55.180 17:26:50 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.180 17:26:50 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:55.180 17:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.180 + '[' 2 -ne 2 ']' 00:03:55.180 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.180 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:55.180 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.180 +++ basename /dev/fd/62 00:03:55.180 ++ mktemp /tmp/62.XXX 00:03:55.180 + tmp_file_1=/tmp/62.u0c 00:03:55.180 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.180 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.180 + tmp_file_2=/tmp/spdk_tgt_config.json.zh5 00:03:55.180 + ret=0 00:03:55.180 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.438 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.438 + diff -u /tmp/62.u0c /tmp/spdk_tgt_config.json.zh5 00:03:55.438 + echo 'INFO: JSON config files are the same' 00:03:55.438 INFO: JSON config files are the same 00:03:55.438 + rm /tmp/62.u0c /tmp/spdk_tgt_config.json.zh5 00:03:55.438 + exit 0 00:03:55.438 17:26:50 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:55.438 17:26:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:55.438 INFO: changing configuration and checking if this can be detected... 
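The 'JSON config files are the same' verdict above is produced by json_diff.sh: it pulls the live configuration with save_config, normalizes both it and the on-disk spdk_tgt_config.json through config_filter.py -method sort, and diffs the results. A hedged recap, assuming config_filter.py reads its input on stdin the way the harness pipes it:

    # Sketch of the comparison; the temp-file names here are made up.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $WS/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $WS/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    $WS/test/json_config/config_filter.py -method sort \
        < $WS/spdk_tgt_config.json > /tmp/file_sorted.json
    diff -u /tmp/file_sorted.json /tmp/live_sorted.json \
        && echo 'INFO: JSON config files are the same'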
00:03:55.438 17:26:50 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:55.438 17:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:55.696 17:26:50 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.696 17:26:50 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:55.696 17:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.696 + '[' 2 -ne 2 ']' 00:03:55.696 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.696 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:55.696 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.696 +++ basename /dev/fd/62 00:03:55.696 ++ mktemp /tmp/62.XXX 00:03:55.696 + tmp_file_1=/tmp/62.Ym5 00:03:55.696 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.696 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.696 + tmp_file_2=/tmp/spdk_tgt_config.json.xGj 00:03:55.696 + ret=0 00:03:55.696 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.267 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.267 + diff -u /tmp/62.Ym5 /tmp/spdk_tgt_config.json.xGj 00:03:56.267 + ret=1 00:03:56.267 + echo '=== Start of file: /tmp/62.Ym5 ===' 00:03:56.267 + cat /tmp/62.Ym5 00:03:56.267 + echo '=== End of file: /tmp/62.Ym5 ===' 00:03:56.267 + echo '' 00:03:56.267 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xGj ===' 00:03:56.267 + cat /tmp/spdk_tgt_config.json.xGj 00:03:56.267 + echo '=== End of file: /tmp/spdk_tgt_config.json.xGj ===' 00:03:56.267 + echo '' 00:03:56.267 + rm /tmp/62.Ym5 /tmp/spdk_tgt_config.json.xGj 00:03:56.267 + exit 1 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:56.267 INFO: configuration change detected. 
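Conversely, the change-detection pass above deletes the sentinel bdev MallocBdevForConfigChangeCheck over RPC and re-runs the same diff, which must now exit non-zero. Roughly, under the same stdin assumption as the previous sketch:

    # Sketch of the change-detection step; any state-mutating RPC would do, the
    # harness uses the sentinel malloc bdev it created for this purpose.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $WS/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    $WS/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $WS/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    $WS/test/json_config/config_filter.py -method sort \
        < $WS/spdk_tgt_config.json > /tmp/file_sorted.json
    diff -u /tmp/file_sorted.json /tmp/live_sorted.json \
        || echo 'INFO: configuration change detected.'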
00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@317 -- # [[ -n 2109303 ]] 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@193 -- # uname -s 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.267 17:26:51 json_config -- json_config/json_config.sh@323 -- # killprocess 2109303 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@948 -- # '[' -z 2109303 ']' 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@952 -- # kill -0 2109303 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@953 -- # uname 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2109303 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2109303' 00:03:56.267 killing process with pid 2109303 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@967 -- # kill 2109303 00:03:56.267 17:26:51 json_config -- common/autotest_common.sh@972 -- # wait 2109303 00:03:58.177 17:26:52 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.177 17:26:52 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:58.177 17:26:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:58.177 17:26:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.177 17:26:52 json_config -- json_config/json_config.sh@328 -- # return 0 00:03:58.177 17:26:52 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:58.177 INFO: Success 00:03:58.177 00:03:58.177 real 0m16.748s 
00:03:58.177 user 0m18.791s 00:03:58.177 sys 0m1.993s 00:03:58.177 17:26:52 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.177 17:26:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.177 ************************************ 00:03:58.177 END TEST json_config 00:03:58.177 ************************************ 00:03:58.177 17:26:52 -- common/autotest_common.sh@1142 -- # return 0 00:03:58.177 17:26:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.177 17:26:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.177 17:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.177 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:03:58.177 ************************************ 00:03:58.177 START TEST json_config_extra_key 00:03:58.177 ************************************ 00:03:58.177 17:26:52 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:58.177 17:26:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:58.177 17:26:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.177 17:26:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.177 17:26:53 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.177 17:26:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.177 17:26:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.177 17:26:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:58.177 17:26:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:58.177 17:26:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.177 17:26:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:58.177 INFO: launching applications... 00:03:58.177 17:26:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2110322 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.177 17:26:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.178 Waiting for target to run... 00:03:58.178 17:26:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2110322 /var/tmp/spdk_tgt.sock 00:03:58.178 17:26:53 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2110322 ']' 00:03:58.178 17:26:53 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.178 17:26:53 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:58.178 17:26:53 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.178 17:26:53 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:58.178 17:26:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.178 [2024-07-15 17:26:53.073079] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:03:58.178 [2024-07-15 17:26:53.073165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110322 ] 00:03:58.178 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.436 [2024-07-15 17:26:53.414180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.436 [2024-07-15 17:26:53.503024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.001 17:26:54 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:59.001 17:26:54 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:03:59.001 17:26:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:59.001 00:03:59.001 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:59.001 INFO: shutting down applications... 00:03:59.001 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:59.001 17:26:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:59.002 17:26:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:59.002 17:26:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2110322 ]] 00:03:59.002 17:26:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2110322 00:03:59.002 17:26:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:59.002 17:26:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.002 17:26:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2110322 00:03:59.002 17:26:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:59.569 17:26:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:59.569 17:26:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.569 17:26:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2110322 00:03:59.569 17:26:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.135 17:26:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.135 17:26:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.135 17:26:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2110322 00:04:00.135 17:26:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:00.135 17:26:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:00.135 17:26:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:00.135 17:26:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:00.135 SPDK target shutdown done 00:04:00.135 17:26:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:00.135 Success 00:04:00.135 00:04:00.135 real 0m2.046s 00:04:00.135 user 0m1.568s 00:04:00.135 sys 0m0.433s 00:04:00.135 17:26:55 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.135 17:26:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:00.135 ************************************ 00:04:00.135 END TEST json_config_extra_key 00:04:00.135 ************************************ 00:04:00.135 17:26:55 -- 
common/autotest_common.sh@1142 -- # return 0 00:04:00.135 17:26:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.135 17:26:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.135 17:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.135 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:04:00.135 ************************************ 00:04:00.135 START TEST alias_rpc 00:04:00.135 ************************************ 00:04:00.135 17:26:55 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.135 * Looking for test storage... 00:04:00.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:00.135 17:26:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:00.136 17:26:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2110546 00:04:00.136 17:26:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.136 17:26:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2110546 00:04:00.136 17:26:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2110546 ']' 00:04:00.136 17:26:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.136 17:26:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:00.136 17:26:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.136 17:26:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:00.136 17:26:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.136 [2024-07-15 17:26:55.164504] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:00.136 [2024-07-15 17:26:55.164602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110546 ] 00:04:00.136 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.136 [2024-07-15 17:26:55.223546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.395 [2024-07-15 17:26:55.332718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.653 17:26:55 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.653 17:26:55 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:00.653 17:26:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:00.914 17:26:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2110546 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2110546 ']' 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2110546 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2110546 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2110546' 00:04:00.914 killing process with pid 2110546 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@967 -- # kill 2110546 00:04:00.914 17:26:55 alias_rpc -- common/autotest_common.sh@972 -- # wait 2110546 00:04:01.481 00:04:01.481 real 0m1.302s 00:04:01.481 user 0m1.361s 00:04:01.481 sys 0m0.447s 00:04:01.481 17:26:56 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.481 17:26:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.481 ************************************ 00:04:01.481 END TEST alias_rpc 00:04:01.481 ************************************ 00:04:01.481 17:26:56 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.481 17:26:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:01.481 17:26:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.481 17:26:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.481 17:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.481 17:26:56 -- common/autotest_common.sh@10 -- # set +x 00:04:01.481 ************************************ 00:04:01.481 START TEST spdkcli_tcp 00:04:01.481 ************************************ 00:04:01.481 17:26:56 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.481 * Looking for test storage... 
00:04:01.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2110846 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.482 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2110846 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2110846 ']' 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.482 17:26:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.482 [2024-07-15 17:26:56.524656] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:01.482 [2024-07-15 17:26:56.524735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110846 ] 00:04:01.482 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.482 [2024-07-15 17:26:56.582774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.741 [2024-07-15 17:26:56.689524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.741 [2024-07-15 17:26:56.689529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.999 17:26:56 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.999 17:26:56 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:01.999 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2110858 00:04:01.999 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:01.999 17:26:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:02.259 [ 00:04:02.259 "bdev_malloc_delete", 00:04:02.259 "bdev_malloc_create", 00:04:02.259 "bdev_null_resize", 00:04:02.259 "bdev_null_delete", 00:04:02.259 "bdev_null_create", 00:04:02.259 "bdev_nvme_cuse_unregister", 00:04:02.259 "bdev_nvme_cuse_register", 00:04:02.259 "bdev_opal_new_user", 00:04:02.259 "bdev_opal_set_lock_state", 00:04:02.259 "bdev_opal_delete", 00:04:02.259 "bdev_opal_get_info", 00:04:02.259 "bdev_opal_create", 00:04:02.259 "bdev_nvme_opal_revert", 00:04:02.259 "bdev_nvme_opal_init", 00:04:02.259 "bdev_nvme_send_cmd", 00:04:02.259 "bdev_nvme_get_path_iostat", 00:04:02.259 "bdev_nvme_get_mdns_discovery_info", 00:04:02.259 "bdev_nvme_stop_mdns_discovery", 00:04:02.259 "bdev_nvme_start_mdns_discovery", 00:04:02.259 "bdev_nvme_set_multipath_policy", 00:04:02.259 "bdev_nvme_set_preferred_path", 00:04:02.259 "bdev_nvme_get_io_paths", 00:04:02.259 "bdev_nvme_remove_error_injection", 00:04:02.259 "bdev_nvme_add_error_injection", 00:04:02.259 "bdev_nvme_get_discovery_info", 00:04:02.259 "bdev_nvme_stop_discovery", 00:04:02.259 "bdev_nvme_start_discovery", 00:04:02.259 "bdev_nvme_get_controller_health_info", 00:04:02.259 "bdev_nvme_disable_controller", 00:04:02.259 "bdev_nvme_enable_controller", 00:04:02.259 "bdev_nvme_reset_controller", 00:04:02.259 "bdev_nvme_get_transport_statistics", 00:04:02.259 "bdev_nvme_apply_firmware", 00:04:02.259 "bdev_nvme_detach_controller", 00:04:02.259 "bdev_nvme_get_controllers", 00:04:02.259 "bdev_nvme_attach_controller", 00:04:02.259 "bdev_nvme_set_hotplug", 00:04:02.259 "bdev_nvme_set_options", 00:04:02.259 "bdev_passthru_delete", 00:04:02.259 "bdev_passthru_create", 00:04:02.259 "bdev_lvol_set_parent_bdev", 00:04:02.259 "bdev_lvol_set_parent", 00:04:02.259 "bdev_lvol_check_shallow_copy", 00:04:02.259 "bdev_lvol_start_shallow_copy", 00:04:02.259 "bdev_lvol_grow_lvstore", 00:04:02.259 "bdev_lvol_get_lvols", 00:04:02.259 "bdev_lvol_get_lvstores", 00:04:02.259 "bdev_lvol_delete", 00:04:02.259 "bdev_lvol_set_read_only", 00:04:02.259 "bdev_lvol_resize", 00:04:02.259 "bdev_lvol_decouple_parent", 00:04:02.259 "bdev_lvol_inflate", 00:04:02.259 "bdev_lvol_rename", 00:04:02.259 "bdev_lvol_clone_bdev", 00:04:02.259 "bdev_lvol_clone", 00:04:02.259 "bdev_lvol_snapshot", 00:04:02.259 "bdev_lvol_create", 00:04:02.259 "bdev_lvol_delete_lvstore", 00:04:02.259 
"bdev_lvol_rename_lvstore", 00:04:02.259 "bdev_lvol_create_lvstore", 00:04:02.259 "bdev_raid_set_options", 00:04:02.259 "bdev_raid_remove_base_bdev", 00:04:02.259 "bdev_raid_add_base_bdev", 00:04:02.259 "bdev_raid_delete", 00:04:02.259 "bdev_raid_create", 00:04:02.259 "bdev_raid_get_bdevs", 00:04:02.259 "bdev_error_inject_error", 00:04:02.259 "bdev_error_delete", 00:04:02.259 "bdev_error_create", 00:04:02.259 "bdev_split_delete", 00:04:02.259 "bdev_split_create", 00:04:02.259 "bdev_delay_delete", 00:04:02.259 "bdev_delay_create", 00:04:02.259 "bdev_delay_update_latency", 00:04:02.259 "bdev_zone_block_delete", 00:04:02.259 "bdev_zone_block_create", 00:04:02.259 "blobfs_create", 00:04:02.259 "blobfs_detect", 00:04:02.260 "blobfs_set_cache_size", 00:04:02.260 "bdev_aio_delete", 00:04:02.260 "bdev_aio_rescan", 00:04:02.260 "bdev_aio_create", 00:04:02.260 "bdev_ftl_set_property", 00:04:02.260 "bdev_ftl_get_properties", 00:04:02.260 "bdev_ftl_get_stats", 00:04:02.260 "bdev_ftl_unmap", 00:04:02.260 "bdev_ftl_unload", 00:04:02.260 "bdev_ftl_delete", 00:04:02.260 "bdev_ftl_load", 00:04:02.260 "bdev_ftl_create", 00:04:02.260 "bdev_virtio_attach_controller", 00:04:02.260 "bdev_virtio_scsi_get_devices", 00:04:02.260 "bdev_virtio_detach_controller", 00:04:02.260 "bdev_virtio_blk_set_hotplug", 00:04:02.260 "bdev_iscsi_delete", 00:04:02.260 "bdev_iscsi_create", 00:04:02.260 "bdev_iscsi_set_options", 00:04:02.260 "accel_error_inject_error", 00:04:02.260 "ioat_scan_accel_module", 00:04:02.260 "dsa_scan_accel_module", 00:04:02.260 "iaa_scan_accel_module", 00:04:02.260 "vfu_virtio_create_scsi_endpoint", 00:04:02.260 "vfu_virtio_scsi_remove_target", 00:04:02.260 "vfu_virtio_scsi_add_target", 00:04:02.260 "vfu_virtio_create_blk_endpoint", 00:04:02.260 "vfu_virtio_delete_endpoint", 00:04:02.260 "keyring_file_remove_key", 00:04:02.260 "keyring_file_add_key", 00:04:02.260 "keyring_linux_set_options", 00:04:02.260 "iscsi_get_histogram", 00:04:02.260 "iscsi_enable_histogram", 00:04:02.260 "iscsi_set_options", 00:04:02.260 "iscsi_get_auth_groups", 00:04:02.260 "iscsi_auth_group_remove_secret", 00:04:02.260 "iscsi_auth_group_add_secret", 00:04:02.260 "iscsi_delete_auth_group", 00:04:02.260 "iscsi_create_auth_group", 00:04:02.260 "iscsi_set_discovery_auth", 00:04:02.260 "iscsi_get_options", 00:04:02.260 "iscsi_target_node_request_logout", 00:04:02.260 "iscsi_target_node_set_redirect", 00:04:02.260 "iscsi_target_node_set_auth", 00:04:02.260 "iscsi_target_node_add_lun", 00:04:02.260 "iscsi_get_stats", 00:04:02.260 "iscsi_get_connections", 00:04:02.260 "iscsi_portal_group_set_auth", 00:04:02.260 "iscsi_start_portal_group", 00:04:02.260 "iscsi_delete_portal_group", 00:04:02.260 "iscsi_create_portal_group", 00:04:02.260 "iscsi_get_portal_groups", 00:04:02.260 "iscsi_delete_target_node", 00:04:02.260 "iscsi_target_node_remove_pg_ig_maps", 00:04:02.260 "iscsi_target_node_add_pg_ig_maps", 00:04:02.260 "iscsi_create_target_node", 00:04:02.260 "iscsi_get_target_nodes", 00:04:02.260 "iscsi_delete_initiator_group", 00:04:02.260 "iscsi_initiator_group_remove_initiators", 00:04:02.260 "iscsi_initiator_group_add_initiators", 00:04:02.260 "iscsi_create_initiator_group", 00:04:02.260 "iscsi_get_initiator_groups", 00:04:02.260 "nvmf_set_crdt", 00:04:02.260 "nvmf_set_config", 00:04:02.260 "nvmf_set_max_subsystems", 00:04:02.260 "nvmf_stop_mdns_prr", 00:04:02.260 "nvmf_publish_mdns_prr", 00:04:02.260 "nvmf_subsystem_get_listeners", 00:04:02.260 "nvmf_subsystem_get_qpairs", 00:04:02.260 "nvmf_subsystem_get_controllers", 00:04:02.260 
"nvmf_get_stats", 00:04:02.260 "nvmf_get_transports", 00:04:02.260 "nvmf_create_transport", 00:04:02.260 "nvmf_get_targets", 00:04:02.260 "nvmf_delete_target", 00:04:02.260 "nvmf_create_target", 00:04:02.260 "nvmf_subsystem_allow_any_host", 00:04:02.260 "nvmf_subsystem_remove_host", 00:04:02.260 "nvmf_subsystem_add_host", 00:04:02.260 "nvmf_ns_remove_host", 00:04:02.260 "nvmf_ns_add_host", 00:04:02.260 "nvmf_subsystem_remove_ns", 00:04:02.260 "nvmf_subsystem_add_ns", 00:04:02.260 "nvmf_subsystem_listener_set_ana_state", 00:04:02.260 "nvmf_discovery_get_referrals", 00:04:02.260 "nvmf_discovery_remove_referral", 00:04:02.260 "nvmf_discovery_add_referral", 00:04:02.260 "nvmf_subsystem_remove_listener", 00:04:02.260 "nvmf_subsystem_add_listener", 00:04:02.260 "nvmf_delete_subsystem", 00:04:02.260 "nvmf_create_subsystem", 00:04:02.260 "nvmf_get_subsystems", 00:04:02.260 "env_dpdk_get_mem_stats", 00:04:02.260 "nbd_get_disks", 00:04:02.260 "nbd_stop_disk", 00:04:02.260 "nbd_start_disk", 00:04:02.260 "ublk_recover_disk", 00:04:02.260 "ublk_get_disks", 00:04:02.260 "ublk_stop_disk", 00:04:02.260 "ublk_start_disk", 00:04:02.260 "ublk_destroy_target", 00:04:02.260 "ublk_create_target", 00:04:02.260 "virtio_blk_create_transport", 00:04:02.260 "virtio_blk_get_transports", 00:04:02.260 "vhost_controller_set_coalescing", 00:04:02.260 "vhost_get_controllers", 00:04:02.260 "vhost_delete_controller", 00:04:02.260 "vhost_create_blk_controller", 00:04:02.260 "vhost_scsi_controller_remove_target", 00:04:02.260 "vhost_scsi_controller_add_target", 00:04:02.260 "vhost_start_scsi_controller", 00:04:02.260 "vhost_create_scsi_controller", 00:04:02.260 "thread_set_cpumask", 00:04:02.260 "framework_get_governor", 00:04:02.260 "framework_get_scheduler", 00:04:02.260 "framework_set_scheduler", 00:04:02.260 "framework_get_reactors", 00:04:02.260 "thread_get_io_channels", 00:04:02.260 "thread_get_pollers", 00:04:02.260 "thread_get_stats", 00:04:02.260 "framework_monitor_context_switch", 00:04:02.260 "spdk_kill_instance", 00:04:02.260 "log_enable_timestamps", 00:04:02.260 "log_get_flags", 00:04:02.260 "log_clear_flag", 00:04:02.260 "log_set_flag", 00:04:02.260 "log_get_level", 00:04:02.260 "log_set_level", 00:04:02.260 "log_get_print_level", 00:04:02.260 "log_set_print_level", 00:04:02.260 "framework_enable_cpumask_locks", 00:04:02.260 "framework_disable_cpumask_locks", 00:04:02.260 "framework_wait_init", 00:04:02.260 "framework_start_init", 00:04:02.260 "scsi_get_devices", 00:04:02.260 "bdev_get_histogram", 00:04:02.260 "bdev_enable_histogram", 00:04:02.260 "bdev_set_qos_limit", 00:04:02.260 "bdev_set_qd_sampling_period", 00:04:02.260 "bdev_get_bdevs", 00:04:02.260 "bdev_reset_iostat", 00:04:02.260 "bdev_get_iostat", 00:04:02.261 "bdev_examine", 00:04:02.261 "bdev_wait_for_examine", 00:04:02.261 "bdev_set_options", 00:04:02.261 "notify_get_notifications", 00:04:02.261 "notify_get_types", 00:04:02.261 "accel_get_stats", 00:04:02.261 "accel_set_options", 00:04:02.261 "accel_set_driver", 00:04:02.261 "accel_crypto_key_destroy", 00:04:02.261 "accel_crypto_keys_get", 00:04:02.261 "accel_crypto_key_create", 00:04:02.261 "accel_assign_opc", 00:04:02.261 "accel_get_module_info", 00:04:02.261 "accel_get_opc_assignments", 00:04:02.261 "vmd_rescan", 00:04:02.261 "vmd_remove_device", 00:04:02.261 "vmd_enable", 00:04:02.261 "sock_get_default_impl", 00:04:02.261 "sock_set_default_impl", 00:04:02.261 "sock_impl_set_options", 00:04:02.261 "sock_impl_get_options", 00:04:02.261 "iobuf_get_stats", 00:04:02.261 "iobuf_set_options", 
00:04:02.261 "keyring_get_keys", 00:04:02.261 "framework_get_pci_devices", 00:04:02.261 "framework_get_config", 00:04:02.261 "framework_get_subsystems", 00:04:02.261 "vfu_tgt_set_base_path", 00:04:02.261 "trace_get_info", 00:04:02.261 "trace_get_tpoint_group_mask", 00:04:02.261 "trace_disable_tpoint_group", 00:04:02.261 "trace_enable_tpoint_group", 00:04:02.261 "trace_clear_tpoint_mask", 00:04:02.261 "trace_set_tpoint_mask", 00:04:02.261 "spdk_get_version", 00:04:02.261 "rpc_get_methods" 00:04:02.261 ] 00:04:02.261 17:26:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.261 17:26:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:02.261 17:26:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2110846 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2110846 ']' 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2110846 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2110846 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2110846' 00:04:02.261 killing process with pid 2110846 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2110846 00:04:02.261 17:26:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2110846 00:04:02.831 00:04:02.831 real 0m1.307s 00:04:02.831 user 0m2.276s 00:04:02.831 sys 0m0.457s 00:04:02.831 17:26:57 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.831 17:26:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.831 ************************************ 00:04:02.831 END TEST spdkcli_tcp 00:04:02.831 ************************************ 00:04:02.831 17:26:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:02.831 17:26:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.831 17:26:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.831 17:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.831 17:26:57 -- common/autotest_common.sh@10 -- # set +x 00:04:02.831 ************************************ 00:04:02.831 START TEST dpdk_mem_utility 00:04:02.831 ************************************ 00:04:02.831 17:26:57 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.831 * Looking for test storage... 
00:04:02.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:02.831 17:26:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:02.831 17:26:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2111051 00:04:02.831 17:26:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.831 17:26:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2111051 00:04:02.831 17:26:57 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2111051 ']' 00:04:02.831 17:26:57 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.831 17:26:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.831 17:26:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.831 17:26:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.831 17:26:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.831 [2024-07-15 17:26:57.870364] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:02.831 [2024-07-15 17:26:57.870448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111051 ] 00:04:02.831 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.831 [2024-07-15 17:26:57.926182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.092 [2024-07-15 17:26:58.032937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.386 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.386 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:03.386 17:26:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.386 17:26:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.386 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.386 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.386 { 00:04:03.386 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.386 } 00:04:03.386 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.386 17:26:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:03.386 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:03.386 1 heaps totaling size 814.000000 MiB 00:04:03.386 size: 814.000000 MiB heap id: 0 00:04:03.386 end heaps---------- 00:04:03.386 8 mempools totaling size 598.116089 MiB 00:04:03.386 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.386 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.386 size: 84.521057 MiB name: bdev_io_2111051 00:04:03.386 size: 51.011292 MiB name: evtpool_2111051 00:04:03.386 
size: 50.003479 MiB name: msgpool_2111051 00:04:03.386 size: 21.763794 MiB name: PDU_Pool 00:04:03.386 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.386 size: 0.026123 MiB name: Session_Pool 00:04:03.386 end mempools------- 00:04:03.386 6 memzones totaling size 4.142822 MiB 00:04:03.386 size: 1.000366 MiB name: RG_ring_0_2111051 00:04:03.386 size: 1.000366 MiB name: RG_ring_1_2111051 00:04:03.386 size: 1.000366 MiB name: RG_ring_4_2111051 00:04:03.386 size: 1.000366 MiB name: RG_ring_5_2111051 00:04:03.386 size: 0.125366 MiB name: RG_ring_2_2111051 00:04:03.386 size: 0.015991 MiB name: RG_ring_3_2111051 00:04:03.386 end memzones------- 00:04:03.386 17:26:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.386 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:03.386 list of free elements. size: 12.519348 MiB 00:04:03.386 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:03.386 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:03.386 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:03.386 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:03.386 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:03.386 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:03.386 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:03.386 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:03.386 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:03.386 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:03.386 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:03.386 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:03.386 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:03.386 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:03.386 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:03.386 list of standard malloc elements. 
size: 199.218079 MiB 00:04:03.386 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:03.386 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:03.386 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:03.386 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:03.386 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:03.386 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:03.386 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:03.386 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:03.386 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:03.386 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:03.386 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:03.386 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:03.386 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:03.386 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:03.386 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:03.386 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:03.386 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:03.386 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:03.386 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:03.387 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:03.387 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:03.387 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:03.387 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:03.387 list of memzone associated elements. 
size: 602.262573 MiB 00:04:03.387 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:03.387 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.387 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:03.387 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:03.387 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:03.387 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2111051_0 00:04:03.387 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:03.387 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2111051_0 00:04:03.387 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:03.387 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2111051_0 00:04:03.387 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:03.387 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:03.387 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:03.387 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:03.387 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:03.387 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2111051 00:04:03.387 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:03.387 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2111051 00:04:03.387 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:03.387 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2111051 00:04:03.387 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:03.387 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.387 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:03.387 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.387 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:03.387 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.387 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:03.387 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.387 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:03.387 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2111051 00:04:03.387 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:03.387 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2111051 00:04:03.387 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:03.387 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2111051 00:04:03.387 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:03.387 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2111051 00:04:03.387 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:03.387 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2111051 00:04:03.387 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:03.387 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.387 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:03.387 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.387 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:03.387 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.387 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:03.387 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2111051 00:04:03.387 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:03.387 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.387 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:03.387 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.387 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:03.387 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2111051 00:04:03.387 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:03.387 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.387 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:03.387 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2111051 00:04:03.387 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:03.387 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2111051 00:04:03.387 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:03.387 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.387 17:26:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:03.387 17:26:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2111051 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2111051 ']' 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2111051 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2111051 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2111051' 00:04:03.387 killing process with pid 2111051 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2111051 00:04:03.387 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2111051 00:04:03.958 00:04:03.958 real 0m1.124s 00:04:03.958 user 0m1.090s 00:04:03.958 sys 0m0.394s 00:04:03.958 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.958 17:26:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.958 ************************************ 00:04:03.958 END TEST dpdk_mem_utility 00:04:03.958 ************************************ 00:04:03.958 17:26:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:03.958 17:26:58 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.958 17:26:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.958 17:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.958 17:26:58 -- common/autotest_common.sh@10 -- # set +x 00:04:03.958 ************************************ 00:04:03.958 START TEST event 00:04:03.958 ************************************ 00:04:03.958 17:26:58 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.958 * Looking for test storage... 
00:04:03.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:03.958 17:26:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:03.958 17:26:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:03.958 17:26:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.959 17:26:58 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:03.959 17:26:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.959 17:26:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.959 ************************************ 00:04:03.959 START TEST event_perf 00:04:03.959 ************************************ 00:04:03.959 17:26:59 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.959 Running I/O for 1 seconds...[2024-07-15 17:26:59.032822] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:03.959 [2024-07-15 17:26:59.032956] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111244 ] 00:04:03.959 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.217 [2024-07-15 17:26:59.100545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:04.217 [2024-07-15 17:26:59.223772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.217 [2024-07-15 17:26:59.223826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:04.217 [2024-07-15 17:26:59.223947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:04.217 [2024-07-15 17:26:59.223951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.596 Running I/O for 1 seconds... 00:04:05.596 lcore 0: 239229 00:04:05.596 lcore 1: 239230 00:04:05.596 lcore 2: 239229 00:04:05.596 lcore 3: 239228 00:04:05.596 done. 00:04:05.596 00:04:05.596 real 0m1.327s 00:04:05.596 user 0m4.225s 00:04:05.596 sys 0m0.092s 00:04:05.596 17:27:00 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.596 17:27:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:05.596 ************************************ 00:04:05.596 END TEST event_perf 00:04:05.596 ************************************ 00:04:05.596 17:27:00 event -- common/autotest_common.sh@1142 -- # return 0 00:04:05.596 17:27:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.596 17:27:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:05.596 17:27:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.596 17:27:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.596 ************************************ 00:04:05.596 START TEST event_reactor 00:04:05.596 ************************************ 00:04:05.596 17:27:00 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.596 [2024-07-15 17:27:00.402375] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:05.597 [2024-07-15 17:27:00.402427] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111434 ] 00:04:05.597 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.597 [2024-07-15 17:27:00.463457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.597 [2024-07-15 17:27:00.583970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.971 test_start 00:04:06.971 oneshot 00:04:06.971 tick 100 00:04:06.971 tick 100 00:04:06.971 tick 250 00:04:06.971 tick 100 00:04:06.971 tick 100 00:04:06.971 tick 250 00:04:06.971 tick 100 00:04:06.971 tick 500 00:04:06.971 tick 100 00:04:06.971 tick 100 00:04:06.971 tick 250 00:04:06.971 tick 100 00:04:06.971 tick 100 00:04:06.971 test_end 00:04:06.971 00:04:06.971 real 0m1.308s 00:04:06.971 user 0m1.224s 00:04:06.971 sys 0m0.079s 00:04:06.971 17:27:01 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.971 17:27:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:06.971 ************************************ 00:04:06.971 END TEST event_reactor 00:04:06.971 ************************************ 00:04:06.971 17:27:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:06.971 17:27:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:06.971 17:27:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:06.971 17:27:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.971 17:27:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.971 ************************************ 00:04:06.971 START TEST event_reactor_perf 00:04:06.971 ************************************ 00:04:06.971 17:27:01 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:06.971 [2024-07-15 17:27:01.754637] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:06.971 [2024-07-15 17:27:01.754705] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111668 ] 00:04:06.971 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.971 [2024-07-15 17:27:01.816589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.971 [2024-07-15 17:27:01.935830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.348 test_start 00:04:08.348 test_end 00:04:08.348 Performance: 363552 events per second 00:04:08.348 00:04:08.348 real 0m1.310s 00:04:08.348 user 0m1.222s 00:04:08.348 sys 0m0.081s 00:04:08.348 17:27:03 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.348 17:27:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:08.348 ************************************ 00:04:08.348 END TEST event_reactor_perf 00:04:08.348 ************************************ 00:04:08.348 17:27:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:08.348 17:27:03 event -- event/event.sh@49 -- # uname -s 00:04:08.348 17:27:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:08.348 17:27:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:08.348 17:27:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.348 17:27:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.348 17:27:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:08.348 ************************************ 00:04:08.348 START TEST event_scheduler 00:04:08.348 ************************************ 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:08.348 * Looking for test storage... 00:04:08.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:08.348 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:08.348 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2111908 00:04:08.348 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:08.348 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.348 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2111908 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2111908 ']' 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.348 [2024-07-15 17:27:03.199911] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:08.348 [2024-07-15 17:27:03.200020] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111908 ] 00:04:08.348 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.348 [2024-07-15 17:27:03.260515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:08.348 [2024-07-15 17:27:03.366774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.348 [2024-07-15 17:27:03.366844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.348 [2024-07-15 17:27:03.366917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:08.348 [2024-07-15 17:27:03.366921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:08.348 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.348 [2024-07-15 17:27:03.423706] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:08.348 [2024-07-15 17:27:03.423731] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:08.348 [2024-07-15 17:27:03.423755] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:08.348 [2024-07-15 17:27:03.423765] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:08.348 [2024-07-15 17:27:03.423775] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.348 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.348 17:27:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.608 [2024-07-15 17:27:03.522345] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:08.608 17:27:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.608 17:27:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:08.608 17:27:03 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.608 17:27:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.608 17:27:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:08.608 ************************************ 00:04:08.608 START TEST scheduler_create_thread 00:04:08.608 ************************************ 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.608 2 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.608 3 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.608 4 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.608 5 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.608 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 6 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 7 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 8 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 9 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 10 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.609 17:27:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.177 17:27:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.177 00:04:09.177 real 0m0.589s 00:04:09.177 user 0m0.014s 00:04:09.177 sys 0m0.002s 00:04:09.177 17:27:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.177 17:27:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.177 ************************************ 00:04:09.177 END TEST scheduler_create_thread 00:04:09.177 ************************************ 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:09.177 17:27:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:09.177 17:27:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2111908 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2111908 ']' 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2111908 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2111908 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:09.177 17:27:04 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2111908' 00:04:09.178 killing process with pid 2111908 00:04:09.178 17:27:04 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2111908 00:04:09.178 17:27:04 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2111908 00:04:09.743 [2024-07-15 17:27:04.622851] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:09.743 00:04:09.743 real 0m1.760s 00:04:09.743 user 0m2.227s 00:04:09.743 sys 0m0.343s 00:04:09.743 17:27:04 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.743 17:27:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:09.743 ************************************ 00:04:09.743 END TEST event_scheduler 00:04:09.743 ************************************ 00:04:10.001 17:27:04 event -- common/autotest_common.sh@1142 -- # return 0 00:04:10.001 17:27:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:10.001 17:27:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:10.001 17:27:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.001 17:27:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.001 17:27:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.001 ************************************ 00:04:10.001 START TEST app_repeat 00:04:10.001 ************************************ 00:04:10.001 17:27:04 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2112167 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2112167' 00:04:10.001 Process app_repeat pid: 2112167 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:10.001 spdk_app_start Round 0 00:04:10.001 17:27:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2112167 /var/tmp/spdk-nbd.sock 00:04:10.001 17:27:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2112167 ']' 00:04:10.001 17:27:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:10.001 17:27:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.001 17:27:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:10.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:10.001 17:27:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.001 17:27:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:10.001 [2024-07-15 17:27:04.950542] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
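The app_repeat run that starts here follows a simple launch-and-wait pattern; a minimal sketch, assuming the SPDK checkout layout seen in the paths above. The polling loop stands in for the waitforlisten helper from autotest_common.sh and is an assumption, not its actual implementation.

#!/usr/bin/env bash
# Launch the app_repeat test app against its own RPC socket and wait for it
# to come up (sketch of the pattern visible in the log above).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_sock=/var/tmp/spdk-nbd.sock

"$SPDK_DIR/test/event/app_repeat/app_repeat" -r "$rpc_sock" -m 0x3 -t 4 &
repeat_pid=$!
trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

# Assumed stand-in for waitforlisten: poll until the UNIX socket answers an RPC.
for ((i = 0; i < 100; i++)); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done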
00:04:10.001 [2024-07-15 17:27:04.950602] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112167 ] 00:04:10.001 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.001 [2024-07-15 17:27:05.014194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.001 [2024-07-15 17:27:05.130068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.001 [2024-07-15 17:27:05.130073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.259 17:27:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:10.259 17:27:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:10.259 17:27:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.517 Malloc0 00:04:10.517 17:27:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:10.774 Malloc1 00:04:10.774 17:27:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:10.774 17:27:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:11.032 /dev/nbd0 00:04:11.032 17:27:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:11.032 17:27:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:11.032 17:27:06 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.032 1+0 records in 00:04:11.032 1+0 records out 00:04:11.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174952 s, 23.4 MB/s 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:11.032 17:27:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:11.032 17:27:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.032 17:27:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.032 17:27:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:11.289 /dev/nbd1 00:04:11.289 17:27:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:11.289 17:27:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.289 1+0 records in 00:04:11.289 1+0 records out 00:04:11.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175341 s, 23.4 MB/s 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:11.289 17:27:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:11.289 17:27:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.289 17:27:06 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.289 17:27:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:11.289 17:27:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.289 17:27:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:11.547 17:27:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:11.547 { 00:04:11.547 "nbd_device": "/dev/nbd0", 00:04:11.547 "bdev_name": "Malloc0" 00:04:11.547 }, 00:04:11.547 { 00:04:11.547 "nbd_device": "/dev/nbd1", 00:04:11.547 "bdev_name": "Malloc1" 00:04:11.547 } 00:04:11.547 ]' 00:04:11.547 17:27:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:11.547 { 00:04:11.547 "nbd_device": "/dev/nbd0", 00:04:11.547 "bdev_name": "Malloc0" 00:04:11.547 }, 00:04:11.547 { 00:04:11.547 "nbd_device": "/dev/nbd1", 00:04:11.547 "bdev_name": "Malloc1" 00:04:11.547 } 00:04:11.547 ]' 00:04:11.547 17:27:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:11.547 17:27:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:11.547 /dev/nbd1' 00:04:11.547 17:27:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:11.547 /dev/nbd1' 00:04:11.547 17:27:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:11.804 256+0 records in 00:04:11.804 256+0 records out 00:04:11.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371725 s, 282 MB/s 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:11.804 256+0 records in 00:04:11.804 256+0 records out 00:04:11.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240768 s, 43.6 MB/s 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:11.804 256+0 records in 00:04:11.804 256+0 records out 00:04:11.804 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0217887 s, 48.1 MB/s 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:11.804 17:27:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:11.805 17:27:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.060 17:27:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.316 17:27:07 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.316 17:27:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:12.580 17:27:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:12.580 17:27:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:12.838 17:27:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:13.095 [2024-07-15 17:27:08.168348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.353 [2024-07-15 17:27:08.281787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.354 [2024-07-15 17:27:08.281787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.354 [2024-07-15 17:27:08.343371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:13.354 [2024-07-15 17:27:08.343451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:15.881 17:27:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:15.881 17:27:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:15.881 spdk_app_start Round 1 00:04:15.881 17:27:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2112167 /var/tmp/spdk-nbd.sock 00:04:15.881 17:27:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2112167 ']' 00:04:15.881 17:27:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:15.881 17:27:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.881 17:27:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:15.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
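Round 0 above, and Rounds 1 and 2 below, repeat the same data-verification flow against two malloc bdevs exported over NBD. A condensed sketch of one round follows, with the RPC names, sizes, and dd/cmp options taken from the log; the temporary file path is an assumption (the CI run keeps it under the SPDK test tree).

#!/usr/bin/env bash
# One app_repeat round: create malloc bdevs, export them over NBD, write the
# same random data to both, read it back and compare (pattern from the log).
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/tmp/nbdrandtest   # assumed scratch location

modprobe nbd
$rpc bdev_malloc_create 64 4096        # prints Malloc0
$rpc bdev_malloc_create 64 4096        # prints Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" "$nbd"         # non-zero exit on any data mismatch
done
rm "$tmp"

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1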
00:04:15.881 17:27:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.881 17:27:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:16.139 17:27:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.139 17:27:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:16.139 17:27:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.397 Malloc0 00:04:16.397 17:27:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.655 Malloc1 00:04:16.655 17:27:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.655 17:27:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:16.913 /dev/nbd0 00:04:16.913 17:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:16.913 17:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:16.913 1+0 records in 00:04:16.913 1+0 records out 00:04:16.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178344 s, 23.0 MB/s 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:16.913 17:27:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:16.913 17:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:16.913 17:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.913 17:27:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:17.171 /dev/nbd1 00:04:17.171 17:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:17.171 17:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:17.171 17:27:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:17.171 17:27:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:17.171 17:27:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:17.171 17:27:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:17.171 17:27:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:17.433 1+0 records in 00:04:17.433 1+0 records out 00:04:17.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165273 s, 24.8 MB/s 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:17.433 17:27:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:17.433 17:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:17.433 17:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.433 17:27:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.433 17:27:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.433 17:27:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.724 17:27:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:17.724 { 00:04:17.724 "nbd_device": "/dev/nbd0", 00:04:17.724 "bdev_name": "Malloc0" 00:04:17.724 }, 00:04:17.724 { 00:04:17.724 "nbd_device": "/dev/nbd1", 00:04:17.724 "bdev_name": "Malloc1" 00:04:17.724 } 00:04:17.724 ]' 00:04:17.724 17:27:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.724 { 00:04:17.724 "nbd_device": "/dev/nbd0", 00:04:17.724 "bdev_name": "Malloc0" 00:04:17.724 }, 00:04:17.724 { 00:04:17.724 "nbd_device": "/dev/nbd1", 00:04:17.724 "bdev_name": "Malloc1" 00:04:17.724 } 00:04:17.724 ]' 00:04:17.724 17:27:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.724 17:27:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.724 /dev/nbd1' 00:04:17.724 17:27:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.724 /dev/nbd1' 00:04:17.724 17:27:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.725 256+0 records in 00:04:17.725 256+0 records out 00:04:17.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494543 s, 212 MB/s 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.725 256+0 records in 00:04:17.725 256+0 records out 00:04:17.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201662 s, 52.0 MB/s 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.725 256+0 records in 00:04:17.725 256+0 records out 00:04:17.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242756 s, 43.2 MB/s 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.725 17:27:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.983 17:27:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.241 17:27:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:18.499 17:27:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:18.499 17:27:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:18.757 17:27:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:19.017 [2024-07-15 17:27:14.118555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.275 [2024-07-15 17:27:14.233983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.275 [2024-07-15 17:27:14.233988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.275 [2024-07-15 17:27:14.295461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.275 [2024-07-15 17:27:14.295532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:21.814 17:27:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:21.814 17:27:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:21.814 spdk_app_start Round 2 00:04:21.814 17:27:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2112167 /var/tmp/spdk-nbd.sock 00:04:21.814 17:27:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2112167 ']' 00:04:21.814 17:27:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:21.814 17:27:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.814 17:27:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:21.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
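The count checks in the rounds above come from parsing the nbd_get_disks JSON; here is a small sketch of that check. The jq/grep pipeline is the one shown in the log, while the helper name and the final assertion are illustrative assumptions.

#!/usr/bin/env bash
# Count the NBD devices the target currently exports by parsing the JSON
# returned by nbd_get_disks (same jq/grep pipeline the log shows).
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

nbd_count() {
    local disks_json names
    disks_json=$($rpc nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero when nothing matches, hence the || true.
    echo "$names" | grep -c /dev/nbd || true
}

count=$(nbd_count)
[ "$count" -eq 2 ] || echo "expected 2 NBD devices, found $count"

After nbd_stop_disk has run for both devices the same pipeline returns 0, which is the second state the test asserts before killing the app.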
00:04:21.814 17:27:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.814 17:27:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.072 17:27:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:22.072 17:27:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:22.072 17:27:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.330 Malloc0 00:04:22.330 17:27:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.589 Malloc1 00:04:22.589 17:27:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.589 17:27:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:22.847 /dev/nbd0 00:04:22.847 17:27:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:22.847 17:27:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:22.847 1+0 records in 00:04:22.847 1+0 records out 00:04:22.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018422 s, 22.2 MB/s 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:22.847 17:27:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:22.847 17:27:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:22.847 17:27:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.847 17:27:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.106 /dev/nbd1 00:04:23.106 17:27:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.106 17:27:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.106 1+0 records in 00:04:23.106 1+0 records out 00:04:23.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195662 s, 20.9 MB/s 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:23.106 17:27:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:23.106 17:27:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.106 17:27:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.106 17:27:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.106 17:27:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.106 17:27:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:23.365 { 00:04:23.365 "nbd_device": "/dev/nbd0", 00:04:23.365 "bdev_name": "Malloc0" 00:04:23.365 }, 00:04:23.365 { 00:04:23.365 "nbd_device": "/dev/nbd1", 00:04:23.365 "bdev_name": "Malloc1" 00:04:23.365 } 00:04:23.365 ]' 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.365 { 00:04:23.365 "nbd_device": "/dev/nbd0", 00:04:23.365 "bdev_name": "Malloc0" 00:04:23.365 }, 00:04:23.365 { 00:04:23.365 "nbd_device": "/dev/nbd1", 00:04:23.365 "bdev_name": "Malloc1" 00:04:23.365 } 00:04:23.365 ]' 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.365 /dev/nbd1' 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.365 /dev/nbd1' 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.365 256+0 records in 00:04:23.365 256+0 records out 00:04:23.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403852 s, 260 MB/s 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.365 17:27:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.624 256+0 records in 00:04:23.624 256+0 records out 00:04:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240565 s, 43.6 MB/s 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.624 256+0 records in 00:04:23.624 256+0 records out 00:04:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225427 s, 46.5 MB/s 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.624 17:27:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.882 17:27:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.140 17:27:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.140 17:27:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.140 17:27:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.140 17:27:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.140 17:27:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.141 17:27:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.141 17:27:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.141 17:27:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.141 17:27:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.141 17:27:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.141 17:27:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.399 17:27:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.399 17:27:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.658 17:27:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:24.916 [2024-07-15 17:27:19.949673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.175 [2024-07-15 17:27:20.070527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.175 [2024-07-15 17:27:20.070527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.175 [2024-07-15 17:27:20.132619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.175 [2024-07-15 17:27:20.132700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:27.712 17:27:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2112167 /var/tmp/spdk-nbd.sock 00:04:27.712 17:27:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2112167 ']' 00:04:27.712 17:27:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.712 17:27:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.712 17:27:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:27.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
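The repeated "grep -q -w nbdX /proc/partitions" lines above are bounded polls: one waits for an NBD device to appear before it is written to, the other waits for it to disappear after nbd_stop_disk. A simplified sketch of both follows; the 20-attempt bound matches the counters in the log, while the 0.1 s sleep is an assumption.

#!/usr/bin/env bash
# Bounded polls on /proc/partitions, mirroring the waitfornbd and
# waitfornbd_exit behaviour visible in the log (simplified).
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}

waitfornbd nbd0 && echo "/dev/nbd0 is up"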
00:04:27.712 17:27:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.712 17:27:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:27.970 17:27:22 event.app_repeat -- event/event.sh@39 -- # killprocess 2112167 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2112167 ']' 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2112167 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2112167 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2112167' 00:04:27.970 killing process with pid 2112167 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2112167 00:04:27.970 17:27:22 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2112167 00:04:28.227 spdk_app_start is called in Round 0. 00:04:28.227 Shutdown signal received, stop current app iteration 00:04:28.227 Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 reinitialization... 00:04:28.227 spdk_app_start is called in Round 1. 00:04:28.227 Shutdown signal received, stop current app iteration 00:04:28.227 Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 reinitialization... 00:04:28.227 spdk_app_start is called in Round 2. 00:04:28.227 Shutdown signal received, stop current app iteration 00:04:28.227 Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 reinitialization... 00:04:28.227 spdk_app_start is called in Round 3. 
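killprocess, used above to stop both the scheduler app and app_repeat, runs the same guarded kill-then-wait sequence each time. A simplified sketch follows; the ps/sudo guard mirrors the "'[' reactor_0 = sudo ']'" lines in the log, and the real helper does more (it escalates to kill -9 for sudo-wrapped processes), which is omitted here.

#!/usr/bin/env bash
# Simplified version of the killprocess pattern seen in the log: confirm the
# pid is alive and is not a sudo wrapper, then SIGTERM it and reap it.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                      # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1          # do not SIGTERM the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}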
00:04:28.227 Shutdown signal received, stop current app iteration 00:04:28.228 17:27:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:28.228 17:27:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:28.228 00:04:28.228 real 0m18.255s 00:04:28.228 user 0m39.592s 00:04:28.228 sys 0m3.312s 00:04:28.228 17:27:23 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.228 17:27:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.228 ************************************ 00:04:28.228 END TEST app_repeat 00:04:28.228 ************************************ 00:04:28.228 17:27:23 event -- common/autotest_common.sh@1142 -- # return 0 00:04:28.228 17:27:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:28.228 17:27:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.228 17:27:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.228 17:27:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.228 17:27:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.228 ************************************ 00:04:28.228 START TEST cpu_locks 00:04:28.228 ************************************ 00:04:28.228 17:27:23 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.228 * Looking for test storage... 00:04:28.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.228 17:27:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:28.228 17:27:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:28.228 17:27:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:28.228 17:27:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:28.228 17:27:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.228 17:27:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.228 17:27:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.228 ************************************ 00:04:28.228 START TEST default_locks 00:04:28.228 ************************************ 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2115032 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2115032 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2115032 ']' 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
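The default_locks case above starts a single spdk_tgt pinned to core 0 (-m 0x1) and then verifies, via the locks_exist helper traced below, that the process is holding a per-core lock file. A rough stand-alone sketch of that check, built only from commands and paths that appear in this log (illustrative, not part of the captured output):

    # start a target on core 0; it should take a file lock on /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 2
    # the same check locks_exist performs
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
    kill "$pid"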
00:04:28.228 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.228 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.485 [2024-07-15 17:27:23.365567] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:28.485 [2024-07-15 17:27:23.365669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115032 ] 00:04:28.485 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.485 [2024-07-15 17:27:23.426257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.485 [2024-07-15 17:27:23.533896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.742 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.742 17:27:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:28.742 17:27:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2115032 00:04:28.742 17:27:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2115032 00:04:28.742 17:27:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.306 lslocks: write error 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2115032 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2115032 ']' 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2115032 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115032 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115032' 00:04:29.306 killing process with pid 2115032 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2115032 00:04:29.306 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2115032 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2115032 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2115032 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2115032 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2115032 ']' 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2115032) - No such process 00:04:29.872 ERROR: process (pid: 2115032) is no longer running 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.872 17:27:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.873 17:27:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.873 17:27:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.873 00:04:29.873 real 0m1.398s 00:04:29.873 user 0m1.340s 00:04:29.873 sys 0m0.557s 00:04:29.873 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.873 17:27:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.873 ************************************ 00:04:29.873 END TEST default_locks 00:04:29.873 ************************************ 00:04:29.873 17:27:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:29.873 17:27:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.873 17:27:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.873 17:27:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.873 17:27:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.873 ************************************ 00:04:29.873 START TEST default_locks_via_rpc 00:04:29.873 ************************************ 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2115309 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.873 17:27:24 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2115309 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2115309 ']' 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.873 17:27:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.873 [2024-07-15 17:27:24.813157] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:29.873 [2024-07-15 17:27:24.813261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115309 ] 00:04:29.873 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.873 [2024-07-15 17:27:24.870404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.873 [2024-07-15 17:27:24.986004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.130 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:30.131 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.388 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:30.388 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2115309 00:04:30.388 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2115309 00:04:30.388 17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.646 
17:27:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2115309 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2115309 ']' 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2115309 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115309 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115309' 00:04:30.646 killing process with pid 2115309 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2115309 00:04:30.646 17:27:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2115309 00:04:30.905 00:04:30.905 real 0m1.249s 00:04:30.905 user 0m1.170s 00:04:30.905 sys 0m0.506s 00:04:30.905 17:27:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.905 17:27:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.905 ************************************ 00:04:30.905 END TEST default_locks_via_rpc 00:04:30.905 ************************************ 00:04:30.905 17:27:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:30.905 17:27:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:30.905 17:27:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.905 17:27:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.905 17:27:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.163 ************************************ 00:04:31.163 START TEST non_locking_app_on_locked_coremask 00:04:31.163 ************************************ 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2115475 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2115475 /var/tmp/spdk.sock 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2115475 ']' 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.163 17:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.163 [2024-07-15 17:27:26.108635] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:31.163 [2024-07-15 17:27:26.108709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115475 ] 00:04:31.163 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.163 [2024-07-15 17:27:26.170546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.163 [2024-07-15 17:27:26.286117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2115607 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2115607 /var/tmp/spdk2.sock 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2115607 ']' 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.102 17:27:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 [2024-07-15 17:27:27.089432] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:32.102 [2024-07-15 17:27:27.089528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115607 ] 00:04:32.102 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.102 [2024-07-15 17:27:27.188015] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
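The "CPU core locks deactivated." notice just above is the point of this case: the second target reuses core mask 0x1 but is started with --disable-cpumask-locks, so it never tries to claim /var/tmp/spdk_cpu_lock_000 and can coexist with the first target. A minimal sketch of that pairing, using the flags and socket path from the trace (illustrative only):

    build/bin/spdk_tgt -m 0x1 &                                                 # claims the core 0 lock file
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same mask, lock claiming skipped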
00:04:32.102 [2024-07-15 17:27:27.188055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.389 [2024-07-15 17:27:27.426484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.957 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.957 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:32.957 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2115475 00:04:32.957 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2115475 00:04:32.957 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.522 lslocks: write error 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2115475 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2115475 ']' 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2115475 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115475 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115475' 00:04:33.522 killing process with pid 2115475 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2115475 00:04:33.522 17:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2115475 00:04:34.456 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2115607 00:04:34.456 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2115607 ']' 00:04:34.456 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2115607 00:04:34.456 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:34.456 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.456 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115607 00:04:34.713 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.713 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.713 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115607' 00:04:34.713 
killing process with pid 2115607 00:04:34.713 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2115607 00:04:34.713 17:27:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2115607 00:04:34.972 00:04:34.972 real 0m4.019s 00:04:34.972 user 0m4.367s 00:04:34.972 sys 0m1.122s 00:04:34.972 17:27:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.972 17:27:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.972 ************************************ 00:04:34.972 END TEST non_locking_app_on_locked_coremask 00:04:34.972 ************************************ 00:04:34.972 17:27:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:34.972 17:27:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:34.972 17:27:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.972 17:27:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.972 17:27:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.230 ************************************ 00:04:35.230 START TEST locking_app_on_unlocked_coremask 00:04:35.230 ************************************ 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2115930 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2115930 /var/tmp/spdk.sock 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2115930 ']' 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.231 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.231 [2024-07-15 17:27:30.176848] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:35.231 [2024-07-15 17:27:30.176952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115930 ] 00:04:35.231 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.231 [2024-07-15 17:27:30.237439] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:35.231 [2024-07-15 17:27:30.237478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.231 [2024-07-15 17:27:30.348140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2116052 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2116052 /var/tmp/spdk2.sock 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2116052 ']' 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.489 17:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.747 [2024-07-15 17:27:30.661044] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:35.747 [2024-07-15 17:27:30.661122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116052 ] 00:04:35.747 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.747 [2024-07-15 17:27:30.754212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.006 [2024-07-15 17:27:30.992044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.574 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.574 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:36.574 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2116052 00:04:36.574 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2116052 00:04:36.574 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.140 lslocks: write error 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2115930 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2115930 ']' 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2115930 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115930 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115930' 00:04:37.140 killing process with pid 2115930 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2115930 00:04:37.140 17:27:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2115930 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2116052 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2116052 ']' 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2116052 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2116052 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2116052' 00:04:38.076 killing process with pid 2116052 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2116052 00:04:38.076 17:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2116052 00:04:38.333 00:04:38.333 real 0m3.265s 00:04:38.333 user 0m3.370s 00:04:38.333 sys 0m1.065s 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.333 ************************************ 00:04:38.333 END TEST locking_app_on_unlocked_coremask 00:04:38.333 ************************************ 00:04:38.333 17:27:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:38.333 17:27:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:38.333 17:27:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.333 17:27:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.333 17:27:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.333 ************************************ 00:04:38.333 START TEST locking_app_on_locked_coremask 00:04:38.333 ************************************ 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2116366 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2116366 /var/tmp/spdk.sock 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2116366 ']' 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.333 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.591 [2024-07-15 17:27:33.496709] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:38.591 [2024-07-15 17:27:33.496808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116366 ] 00:04:38.591 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.591 [2024-07-15 17:27:33.558743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.591 [2024-07-15 17:27:33.674661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2116484 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2116484 /var/tmp/spdk2.sock 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2116484 /var/tmp/spdk2.sock 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2116484 /var/tmp/spdk2.sock 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2116484 ']' 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.849 17:27:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.107 [2024-07-15 17:27:33.991168] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:39.107 [2024-07-15 17:27:33.991269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116484 ] 00:04:39.107 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.107 [2024-07-15 17:27:34.085896] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2116366 has claimed it. 00:04:39.107 [2024-07-15 17:27:34.085970] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2116484) - No such process 00:04:39.672 ERROR: process (pid: 2116484) is no longer running 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2116366 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2116366 00:04:39.672 17:27:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.237 lslocks: write error 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2116366 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2116366 ']' 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2116366 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2116366 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2116366' 00:04:40.237 killing process with pid 2116366 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2116366 00:04:40.237 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2116366 00:04:40.804 00:04:40.804 real 0m2.198s 00:04:40.804 user 0m2.376s 00:04:40.804 sys 0m0.663s 00:04:40.804 17:27:35 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.804 17:27:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.804 ************************************ 00:04:40.804 END TEST locking_app_on_locked_coremask 00:04:40.804 ************************************ 00:04:40.804 17:27:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:40.804 17:27:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:40.804 17:27:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.804 17:27:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.804 17:27:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.804 ************************************ 00:04:40.804 START TEST locking_overlapped_coremask 00:04:40.804 ************************************ 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2116664 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2116664 /var/tmp/spdk.sock 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2116664 ']' 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.804 17:27:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.804 [2024-07-15 17:27:35.747506] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:40.804 [2024-07-15 17:27:35.747606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116664 ] 00:04:40.804 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.804 [2024-07-15 17:27:35.810498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.804 [2024-07-15 17:27:35.929512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.804 [2024-07-15 17:27:35.929585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.804 [2024-07-15 17:27:35.929588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.736 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.736 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:41.736 17:27:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2116802 00:04:41.736 17:27:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2116802 /var/tmp/spdk2.sock 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2116802 /var/tmp/spdk2.sock 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2116802 /var/tmp/spdk2.sock 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2116802 ']' 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.737 17:27:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.737 [2024-07-15 17:27:36.739956] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:04:41.737 [2024-07-15 17:27:36.740055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116802 ] 00:04:41.737 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.737 [2024-07-15 17:27:36.827520] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2116664 has claimed it. 00:04:41.737 [2024-07-15 17:27:36.827588] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:42.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2116802) - No such process 00:04:42.303 ERROR: process (pid: 2116802) is no longer running 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2116664 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2116664 ']' 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2116664 00:04:42.303 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:42.561 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.561 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2116664 00:04:42.561 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.561 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.561 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2116664' 00:04:42.561 killing process with pid 2116664 00:04:42.561 17:27:37 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2116664 00:04:42.561 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2116664 00:04:42.820 00:04:42.820 real 0m2.210s 00:04:42.820 user 0m6.178s 00:04:42.820 sys 0m0.494s 00:04:42.820 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.820 17:27:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.820 ************************************ 00:04:42.820 END TEST locking_overlapped_coremask 00:04:42.820 ************************************ 00:04:42.820 17:27:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:42.820 17:27:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:42.820 17:27:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.820 17:27:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.820 17:27:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.079 ************************************ 00:04:43.079 START TEST locking_overlapped_coremask_via_rpc 00:04:43.079 ************************************ 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2116966 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2116966 /var/tmp/spdk.sock 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2116966 ']' 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.079 17:27:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.079 [2024-07-15 17:27:38.009053] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:43.079 [2024-07-15 17:27:38.009139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116966 ] 00:04:43.079 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.079 [2024-07-15 17:27:38.067272] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
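In this _via_rpc variant both targets come up with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice above), and the per-core locks are only claimed afterwards over RPC, as with the rpc_cmd framework_enable_cpumask_locks calls visible in this log. In plain rpc.py terms, following the invocation style used earlier in the log, the two calls look roughly like this (sketch, not captured output):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks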
00:04:43.079 [2024-07-15 17:27:38.067310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.079 [2024-07-15 17:27:38.175791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.079 [2024-07-15 17:27:38.175854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.079 [2024-07-15 17:27:38.175857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2117092 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2117092 /var/tmp/spdk2.sock 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2117092 ']' 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:43.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.337 17:27:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.595 [2024-07-15 17:27:38.485518] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:43.595 [2024-07-15 17:27:38.485615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117092 ] 00:04:43.595 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.595 [2024-07-15 17:27:38.571176] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.595 [2024-07-15 17:27:38.571209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.853 [2024-07-15 17:27:38.794293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.853 [2024-07-15 17:27:38.794360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:43.853 [2024-07-15 17:27:38.794362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.417 [2024-07-15 17:27:39.447985] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2116966 has claimed it. 
00:04:44.417 request: 00:04:44.417 { 00:04:44.417 "method": "framework_enable_cpumask_locks", 00:04:44.417 "req_id": 1 00:04:44.417 } 00:04:44.417 Got JSON-RPC error response 00:04:44.417 response: 00:04:44.417 { 00:04:44.417 "code": -32603, 00:04:44.417 "message": "Failed to claim CPU core: 2" 00:04:44.417 } 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2116966 /var/tmp/spdk.sock 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2116966 ']' 00:04:44.417 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.418 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.418 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.418 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.418 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2117092 /var/tmp/spdk2.sock 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2117092 ']' 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
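The JSON-RPC failure recorded above is the expected outcome of this case: the first spdk_tgt runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so when the second target is asked to enable cpumask locks it cannot claim core 2, which the first target already holds through /var/tmp/spdk_cpu_lock_002, and the RPC returns -32603. A minimal sketch of exercising the same check by hand, assuming two targets started with the masks and socket paths shown in the log and assuming the stock scripts/rpc.py client (the harness's rpc_cmd wrapper resolves to it); everything below is illustrative rather than part of the recorded run:

    # First target (socket /var/tmp/spdk.sock, -m 0x7) claims one lock file per core:
    ./scripts/rpc.py framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*          # expect ..._000 ..._001 ..._002

    # Second target (socket /var/tmp/spdk2.sock, -m 0x1c) overlaps on core 2, so the
    # same RPC is expected to fail with "Failed to claim CPU core: 2" (code -32603):
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || echo 'overlap detected'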
00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.675 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:44.933 00:04:44.933 real 0m1.997s 00:04:44.933 user 0m1.033s 00:04:44.933 sys 0m0.180s 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.933 17:27:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.933 ************************************ 00:04:44.933 END TEST locking_overlapped_coremask_via_rpc 00:04:44.933 ************************************ 00:04:44.933 17:27:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:44.933 17:27:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:44.933 17:27:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2116966 ]] 00:04:44.933 17:27:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2116966 00:04:44.933 17:27:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2116966 ']' 00:04:44.933 17:27:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2116966 00:04:44.933 17:27:39 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:44.933 17:27:39 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.933 17:27:39 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2116966 00:04:44.933 17:27:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.933 17:27:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.933 17:27:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2116966' 00:04:44.933 killing process with pid 2116966 00:04:44.933 17:27:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2116966 00:04:44.933 17:27:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2116966 00:04:45.498 17:27:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2117092 ]] 00:04:45.498 17:27:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2117092 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2117092 ']' 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2117092 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2117092 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2117092' 00:04:45.498 killing process with pid 2117092 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2117092 00:04:45.498 17:27:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2117092 00:04:46.065 17:27:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.065 17:27:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:46.065 17:27:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2116966 ]] 00:04:46.065 17:27:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2116966 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2116966 ']' 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2116966 00:04:46.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2116966) - No such process 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2116966 is not found' 00:04:46.065 Process with pid 2116966 is not found 00:04:46.065 17:27:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2117092 ]] 00:04:46.065 17:27:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2117092 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2117092 ']' 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2117092 00:04:46.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2117092) - No such process 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2117092 is not found' 00:04:46.065 Process with pid 2117092 is not found 00:04:46.065 17:27:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.065 00:04:46.065 real 0m17.722s 00:04:46.065 user 0m30.897s 00:04:46.065 sys 0m5.499s 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.065 17:27:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.065 ************************************ 00:04:46.065 END TEST cpu_locks 00:04:46.065 ************************************ 00:04:46.065 17:27:40 event -- common/autotest_common.sh@1142 -- # return 0 00:04:46.065 00:04:46.065 real 0m42.035s 00:04:46.065 user 1m19.540s 00:04:46.065 sys 0m9.627s 00:04:46.065 17:27:40 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.065 17:27:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.065 ************************************ 00:04:46.065 END TEST event 00:04:46.065 ************************************ 00:04:46.065 17:27:40 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.065 17:27:40 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.065 17:27:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.065 17:27:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.065 
17:27:40 -- common/autotest_common.sh@10 -- # set +x 00:04:46.065 ************************************ 00:04:46.065 START TEST thread 00:04:46.065 ************************************ 00:04:46.065 17:27:41 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.065 * Looking for test storage... 00:04:46.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:46.065 17:27:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.065 17:27:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:46.065 17:27:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.065 17:27:41 thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.065 ************************************ 00:04:46.065 START TEST thread_poller_perf 00:04:46.065 ************************************ 00:04:46.065 17:27:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.065 [2024-07-15 17:27:41.093580] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:46.065 [2024-07-15 17:27:41.093632] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117464 ] 00:04:46.065 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.065 [2024-07-15 17:27:41.151298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.322 [2024-07-15 17:27:41.261641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.322 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:47.688 ====================================== 00:04:47.688 busy:2712724008 (cyc) 00:04:47.688 total_run_count: 293000 00:04:47.688 tsc_hz: 2700000000 (cyc) 00:04:47.688 ====================================== 00:04:47.689 poller_cost: 9258 (cyc), 3428 (nsec) 00:04:47.689 00:04:47.689 real 0m1.312s 00:04:47.689 user 0m1.231s 00:04:47.689 sys 0m0.075s 00:04:47.689 17:27:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.689 17:27:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.689 ************************************ 00:04:47.689 END TEST thread_poller_perf 00:04:47.689 ************************************ 00:04:47.689 17:27:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:47.689 17:27:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.689 17:27:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:47.689 17:27:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.689 17:27:42 thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.689 ************************************ 00:04:47.689 START TEST thread_poller_perf 00:04:47.689 ************************************ 00:04:47.689 17:27:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.689 [2024-07-15 17:27:42.462586] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:47.689 [2024-07-15 17:27:42.462655] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117624 ] 00:04:47.689 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.689 [2024-07-15 17:27:42.527023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.689 [2024-07-15 17:27:42.642627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.689 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:48.636 ====================================== 00:04:48.636 busy:2702715404 (cyc) 00:04:48.636 total_run_count: 3877000 00:04:48.636 tsc_hz: 2700000000 (cyc) 00:04:48.636 ====================================== 00:04:48.636 poller_cost: 697 (cyc), 258 (nsec) 00:04:48.636 00:04:48.636 real 0m1.316s 00:04:48.636 user 0m1.223s 00:04:48.636 sys 0m0.087s 00:04:48.636 17:27:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.636 17:27:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.636 ************************************ 00:04:48.636 END TEST thread_poller_perf 00:04:48.636 ************************************ 00:04:48.894 17:27:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:48.894 17:27:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:48.894 00:04:48.894 real 0m2.772s 00:04:48.894 user 0m2.516s 00:04:48.894 sys 0m0.255s 00:04:48.894 17:27:43 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.894 17:27:43 thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.894 ************************************ 00:04:48.894 END TEST thread 00:04:48.894 ************************************ 00:04:48.894 17:27:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.894 17:27:43 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.894 17:27:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.894 17:27:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.894 17:27:43 -- common/autotest_common.sh@10 -- # set +x 00:04:48.894 ************************************ 00:04:48.894 START TEST accel 00:04:48.894 ************************************ 00:04:48.894 17:27:43 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:48.894 * Looking for test storage... 00:04:48.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:48.894 17:27:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:48.894 17:27:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:48.894 17:27:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.894 17:27:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2117932 00:04:48.894 17:27:43 accel -- accel/accel.sh@63 -- # waitforlisten 2117932 00:04:48.894 17:27:43 accel -- common/autotest_common.sh@829 -- # '[' -z 2117932 ']' 00:04:48.894 17:27:43 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:48.894 17:27:43 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.894 17:27:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:48.894 17:27:43 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.894 17:27:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.894 17:27:43 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
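For reference, the poller_cost figures in the two thread_poller_perf result blocks above follow directly from the other counters: cycles per poller invocation is the busy cycle count divided by total_run_count, and the nanosecond figure rescales that by the reported 2.7 GHz TSC rate. A quick check with shell arithmetic (illustrative only; bc defaults to integer division):

    echo '2712724008 / 293000' | bc      # 1 us period run:  ~9258 cyc per poller call
    echo '9258 * 1000 / 2700' | bc       # at 2.7 GHz:       ~3428 ns
    echo '2702715404 / 3877000' | bc     # 0 us period run:  ~697 cyc per poller call
    echo '697 * 1000 / 2700' | bc        # at 2.7 GHz:       ~258 ns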
00:04:48.894 17:27:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.894 17:27:43 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.894 17:27:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.894 17:27:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:48.894 17:27:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.894 17:27:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.894 17:27:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:48.894 17:27:43 accel -- accel/accel.sh@41 -- # jq -r . 00:04:48.894 [2024-07-15 17:27:43.932044] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:48.894 [2024-07-15 17:27:43.932132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117932 ] 00:04:48.894 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.894 [2024-07-15 17:27:43.992402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.152 [2024-07-15 17:27:44.107046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.408 17:27:44 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.408 17:27:44 accel -- common/autotest_common.sh@862 -- # return 0 00:04:49.408 17:27:44 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:49.409 17:27:44 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:49.409 17:27:44 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:49.409 17:27:44 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:49.409 17:27:44 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:49.409 17:27:44 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.409 17:27:44 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.409 17:27:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.409 17:27:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.409 17:27:44 accel -- accel/accel.sh@75 -- # killprocess 2117932 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@948 -- # '[' -z 2117932 ']' 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@952 -- # kill -0 2117932 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@953 -- # uname 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2117932 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2117932' 00:04:49.409 killing process with pid 2117932 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@967 -- # kill 2117932 00:04:49.409 17:27:44 accel -- common/autotest_common.sh@972 -- # wait 2117932 00:04:49.973 17:27:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:49.973 17:27:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:49.973 17:27:44 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:49.973 17:27:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.973 17:27:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.973 17:27:44 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:49.973 17:27:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:04:49.973 17:27:44 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.973 17:27:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:49.973 17:27:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:49.973 17:27:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:49.973 17:27:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:49.973 17:27:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.973 17:27:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.973 ************************************ 00:04:49.973 START TEST accel_missing_filename 00:04:49.973 ************************************ 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.973 17:27:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:49.973 17:27:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:49.973 [2024-07-15 17:27:45.007513] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:49.973 [2024-07-15 17:27:45.007580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118100 ] 00:04:49.973 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.973 [2024-07-15 17:27:45.072344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.231 [2024-07-15 17:27:45.191590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.231 [2024-07-15 17:27:45.253335] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.231 [2024-07-15 17:27:45.342021] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:50.489 A filename is required. 
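The abort above is the point of the accel_missing_filename case: for compress and decompress workloads accel_perf takes its uncompressed input through -l, and with -w compress but no -l it exits before starting. A minimal contrast, sketched with the relative binary path the harness uses; the second command assumes a readable input file (test/accel/bib is the one the compress_verify case below points at):

    ./build/examples/accel_perf -t 1 -w compress                       # aborts: 'A filename is required.'
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib   # sketch: runs compression on the given input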
00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.489 00:04:50.489 real 0m0.480s 00:04:50.489 user 0m0.364s 00:04:50.489 sys 0m0.150s 00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.489 17:27:45 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:50.489 ************************************ 00:04:50.489 END TEST accel_missing_filename 00:04:50.489 ************************************ 00:04:50.489 17:27:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:50.489 17:27:45 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.489 17:27:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:50.489 17:27:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.489 17:27:45 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.489 ************************************ 00:04:50.489 START TEST accel_compress_verify 00:04:50.489 ************************************ 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.489 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.489 17:27:45 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:50.489 17:27:45 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:50.489 [2024-07-15 17:27:45.529767] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:50.489 [2024-07-15 17:27:45.529833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118129 ] 00:04:50.489 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.489 [2024-07-15 17:27:45.594048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.747 [2024-07-15 17:27:45.710463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.747 [2024-07-15 17:27:45.772363] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.747 [2024-07-15 17:27:45.860891] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:51.006 00:04:51.006 Compression does not support the verify option, aborting. 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.006 00:04:51.006 real 0m0.472s 00:04:51.006 user 0m0.371s 00:04:51.006 sys 0m0.134s 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.006 17:27:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:51.006 ************************************ 00:04:51.006 END TEST accel_compress_verify 00:04:51.006 ************************************ 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:51.006 17:27:46 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.006 ************************************ 00:04:51.006 START TEST accel_wrong_workload 00:04:51.006 ************************************ 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:51.006 17:27:46 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:51.006 17:27:46 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:51.006 Unsupported workload type: foobar 00:04:51.006 [2024-07-15 17:27:46.054363] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:51.006 accel_perf options: 00:04:51.006 [-h help message] 00:04:51.006 [-q queue depth per core] 00:04:51.006 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:51.006 [-T number of threads per core 00:04:51.006 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:51.006 [-t time in seconds] 00:04:51.006 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:51.006 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:51.006 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:51.006 [-l for compress/decompress workloads, name of uncompressed input file 00:04:51.006 [-S for crc32c workload, use this seed value (default 0) 00:04:51.006 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:51.006 [-f for fill workload, use this BYTE value (default 255) 00:04:51.006 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:51.006 [-y verify result if this switch is on] 00:04:51.006 [-a tasks to allocate per core (default: same value as -q)] 00:04:51.006 Can be used to spread operations across a wider range of memory. 
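The option summary above (printed because foobar is not a recognized -w workload) doubles as a quick reference for driving accel_perf by hand. Two representative invocations, sketched with the same relative binary path the harness uses and otherwise illustrative values:

    # crc32c on the default 4 KiB buffers for 1 second with verification,
    # matching the -S 32 -y run later in this log
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y

    # xor across 3 source buffers, queue depth 64 per core, 8 KiB transfers
    ./build/examples/accel_perf -t 1 -w xor -x 3 -q 64 -o 8192 -y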
00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.006 00:04:51.006 real 0m0.023s 00:04:51.006 user 0m0.013s 00:04:51.006 sys 0m0.010s 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.006 17:27:46 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:51.006 ************************************ 00:04:51.006 END TEST accel_wrong_workload 00:04:51.006 ************************************ 00:04:51.006 Error: writing output failed: Broken pipe 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:51.006 17:27:46 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.006 17:27:46 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.006 ************************************ 00:04:51.006 START TEST accel_negative_buffers 00:04:51.006 ************************************ 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:51.006 17:27:46 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:51.006 -x option must be non-negative. 
00:04:51.006 [2024-07-15 17:27:46.119657] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:51.006 accel_perf options: 00:04:51.006 [-h help message] 00:04:51.006 [-q queue depth per core] 00:04:51.006 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:51.006 [-T number of threads per core 00:04:51.006 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:51.006 [-t time in seconds] 00:04:51.006 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:51.006 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:51.006 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:51.006 [-l for compress/decompress workloads, name of uncompressed input file 00:04:51.006 [-S for crc32c workload, use this seed value (default 0) 00:04:51.006 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:51.006 [-f for fill workload, use this BYTE value (default 255) 00:04:51.006 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:51.006 [-y verify result if this switch is on] 00:04:51.006 [-a tasks to allocate per core (default: same value as -q)] 00:04:51.006 Can be used to spread operations across a wider range of memory. 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.006 00:04:51.006 real 0m0.023s 00:04:51.006 user 0m0.011s 00:04:51.006 sys 0m0.012s 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.006 17:27:46 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:51.006 ************************************ 00:04:51.006 END TEST accel_negative_buffers 00:04:51.006 ************************************ 00:04:51.006 Error: writing output failed: Broken pipe 00:04:51.264 17:27:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:51.265 17:27:46 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:51.265 17:27:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:51.265 17:27:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.265 17:27:46 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.265 ************************************ 00:04:51.265 START TEST accel_crc32c 00:04:51.265 ************************************ 00:04:51.265 17:27:46 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:51.265 17:27:46 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:51.265 [2024-07-15 17:27:46.181203] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:51.265 [2024-07-15 17:27:46.181267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118313 ] 00:04:51.265 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.265 [2024-07-15 17:27:46.247206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.265 [2024-07-15 17:27:46.366782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.523 17:27:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:52.897 17:27:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.897 00:04:52.897 real 0m1.480s 00:04:52.897 user 0m1.334s 00:04:52.897 sys 0m0.148s 00:04:52.897 17:27:47 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.897 17:27:47 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:52.897 ************************************ 00:04:52.897 END TEST accel_crc32c 00:04:52.897 ************************************ 00:04:52.897 17:27:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.897 17:27:47 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:52.897 17:27:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:52.897 17:27:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.897 17:27:47 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.897 ************************************ 00:04:52.897 START TEST accel_crc32c_C2 00:04:52.897 ************************************ 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:52.897 [2024-07-15 17:27:47.705570] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:52.897 [2024-07-15 17:27:47.705634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118470 ] 00:04:52.897 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.897 [2024-07-15 17:27:47.767948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.897 [2024-07-15 17:27:47.888143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.897 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:04:52.898 17:27:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.270 00:04:54.270 real 0m1.455s 00:04:54.270 user 0m1.309s 00:04:54.270 sys 0m0.146s 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.270 17:27:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:54.270 ************************************ 00:04:54.270 END TEST accel_crc32c_C2 00:04:54.270 ************************************ 00:04:54.270 17:27:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:54.270 17:27:49 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:54.270 17:27:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:54.270 17:27:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.270 17:27:49 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.271 ************************************ 00:04:54.271 START TEST accel_copy 00:04:54.271 ************************************ 00:04:54.271 17:27:49 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:54.271 17:27:49 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:54.271 [2024-07-15 17:27:49.205421] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:54.271 [2024-07-15 17:27:49.205486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118633 ] 00:04:54.271 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.271 [2024-07-15 17:27:49.267196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.271 [2024-07-15 17:27:49.385580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.528 17:27:49 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:54.529 17:27:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.902 17:27:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.902 17:27:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.902 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.902 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 
17:27:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:55.903 17:27:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:55.903 00:04:55.903 real 0m1.470s 00:04:55.903 user 0m1.324s 00:04:55.903 sys 0m0.146s 00:04:55.903 17:27:50 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.903 17:27:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:55.903 ************************************ 00:04:55.903 END TEST accel_copy 00:04:55.903 ************************************ 00:04:55.903 17:27:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:55.903 17:27:50 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.903 17:27:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:04:55.903 17:27:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.903 17:27:50 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.903 ************************************ 00:04:55.903 START TEST accel_fill 00:04:55.903 ************************************ 00:04:55.903 17:27:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:55.903 [2024-07-15 17:27:50.717292] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:55.903 [2024-07-15 17:27:50.717361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118905 ] 00:04:55.903 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.903 [2024-07-15 17:27:50.781486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.903 [2024-07-15 17:27:50.900107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:55.903 17:27:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.276 17:27:52 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:57.276 17:27:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.276 00:04:57.276 real 0m1.477s 00:04:57.276 user 0m1.329s 00:04:57.276 sys 0m0.149s 00:04:57.276 17:27:52 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.276 17:27:52 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:04:57.276 ************************************ 00:04:57.276 END TEST accel_fill 00:04:57.276 ************************************ 00:04:57.276 17:27:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:57.276 17:27:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:57.276 17:27:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:57.276 17:27:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.276 17:27:52 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.276 ************************************ 00:04:57.276 START TEST accel_copy_crc32c 00:04:57.276 ************************************ 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:57.276 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:57.276 [2024-07-15 17:27:52.239535] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:57.276 [2024-07-15 17:27:52.239597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119058 ] 00:04:57.276 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.276 [2024-07-15 17:27:52.300533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.534 [2024-07-15 17:27:52.418969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.534 
17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.534 17:27:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.909 00:04:58.909 real 0m1.467s 00:04:58.909 user 0m1.325s 00:04:58.909 sys 0m0.143s 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.909 17:27:53 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:58.909 ************************************ 00:04:58.909 END TEST accel_copy_crc32c 00:04:58.909 ************************************ 00:04:58.909 17:27:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:58.909 17:27:53 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:58.909 17:27:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:58.909 17:27:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.909 17:27:53 accel -- common/autotest_common.sh@10 -- # set +x 00:04:58.909 ************************************ 00:04:58.909 START TEST accel_copy_crc32c_C2 00:04:58.909 ************************************ 00:04:58.909 17:27:53 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:58.909 [2024-07-15 17:27:53.747849] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:04:58.909 [2024-07-15 17:27:53.747944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119225 ] 00:04:58.909 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.909 [2024-07-15 17:27:53.809596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.909 [2024-07-15 17:27:53.929714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.909 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:58.910 17:27:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.285 00:05:00.285 real 0m1.471s 00:05:00.285 user 0m1.320s 00:05:00.285 sys 0m0.152s 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.285 17:27:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:00.285 ************************************ 00:05:00.285 END TEST accel_copy_crc32c_C2 00:05:00.285 ************************************ 00:05:00.285 17:27:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:00.285 17:27:55 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:00.285 17:27:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:00.285 17:27:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.285 17:27:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:00.285 ************************************ 00:05:00.285 START TEST accel_dualcast 00:05:00.285 ************************************ 00:05:00.285 17:27:55 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:00.285 17:27:55 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:00.285 [2024-07-15 17:27:55.262037] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:05:00.285 [2024-07-15 17:27:55.262095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119493 ] 00:05:00.285 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.285 [2024-07-15 17:27:55.324030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.544 [2024-07-15 17:27:55.443566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:00.544 17:27:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.963 17:27:56 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.963 17:27:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:01.964 17:27:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.964 00:05:01.964 real 0m1.475s 00:05:01.964 user 0m1.332s 00:05:01.964 sys 0m0.142s 00:05:01.964 17:27:56 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.964 17:27:56 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:01.964 ************************************ 00:05:01.964 END TEST accel_dualcast 00:05:01.964 ************************************ 00:05:01.964 17:27:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:01.964 17:27:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:01.964 17:27:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:01.964 17:27:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.964 17:27:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.964 ************************************ 00:05:01.964 START TEST accel_compare 00:05:01.964 ************************************ 00:05:01.964 17:27:56 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:01.964 17:27:56 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:01.964 [2024-07-15 17:27:56.778605] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
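The "real 0m1.475s / user 0m1.332s / sys 0m0.142s" triplet and the START TEST / END TEST banners around each case come from the run_test wrapper timing the test function with the shell's built-in time. A rough equivalent of that wrapper is sketched below; it approximates the pattern visible in this log and is not the verbatim autotest_common.sh helper:

  # Simplified stand-in for the run_test wrapper seen in this log:
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                    # produces the real/user/sys lines above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test accel_compare accel_test -t 1 -w compare -y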
00:05:01.964 [2024-07-15 17:27:56.778662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119652 ] 00:05:01.964 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.964 [2024-07-15 17:27:56.839606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.964 [2024-07-15 17:27:56.957778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:01.964 17:27:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.336 17:27:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.337 
17:27:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:03.337 17:27:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.337 00:05:03.337 real 0m1.467s 00:05:03.337 user 0m1.332s 00:05:03.337 sys 0m0.135s 00:05:03.337 17:27:58 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.337 17:27:58 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:03.337 ************************************ 00:05:03.337 END TEST accel_compare 00:05:03.337 ************************************ 00:05:03.337 17:27:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.337 17:27:58 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:03.337 17:27:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:03.337 17:27:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.337 17:27:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.337 ************************************ 00:05:03.337 START TEST accel_xor 00:05:03.337 ************************************ 00:05:03.337 17:27:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:03.337 17:27:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:03.337 [2024-07-15 17:27:58.289307] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
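Each accel_perf invocation in this log receives its accel JSON configuration over a file descriptor (-c /dev/fd/62) assembled by build_accel_config; with no hardware module configured, the trace falls through to accel_module=software as seen above. A hedged illustration of feeding a config over an fd in the same style follows; the fd number is allocated by bash rather than fixed at 62, and the empty JSON body is only a placeholder:

  # Feed a placeholder JSON config to accel_perf over a dynamically allocated fd:
  exec {cfg_fd}< <(printf '%s' '{"subsystems":[]}')
  ./build/examples/accel_perf -c /dev/fd/"$cfg_fd" -t 1 -w xor -y
  exec {cfg_fd}<&-    # close the fd again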
00:05:03.337 [2024-07-15 17:27:58.289372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119814 ] 00:05:03.337 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.337 [2024-07-15 17:27:58.353853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.614 [2024-07-15 17:27:58.478519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.614 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:03.615 17:27:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.991 00:05:04.991 real 0m1.484s 00:05:04.991 user 0m1.346s 00:05:04.991 sys 0m0.139s 00:05:04.991 17:27:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.991 17:27:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:04.991 ************************************ 00:05:04.991 END TEST accel_xor 00:05:04.991 ************************************ 00:05:04.991 17:27:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.991 17:27:59 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:04.991 17:27:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:04.991 17:27:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.991 17:27:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.991 ************************************ 00:05:04.991 START TEST accel_xor 00:05:04.991 ************************************ 00:05:04.991 17:27:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:04.991 17:27:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:04.991 [2024-07-15 17:27:59.816918] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
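The suite runs the xor workload twice: first with the default number of source buffers, then again with -x 3 (the val=2 versus val=3 entries in the two traces reflect that source count). Reproduced by hand, under the same path assumption as the earlier sketches:

  # The two xor variants exercised here, flags copied from the run_test lines:
  ./build/examples/accel_perf -t 1 -w xor -y          # default: 2 source buffers
  ./build/examples/accel_perf -t 1 -w xor -y -x 3     # 3 source buffers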
00:05:04.991 [2024-07-15 17:27:59.816982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120087 ] 00:05:04.991 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.991 [2024-07-15 17:27:59.878411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.991 [2024-07-15 17:28:00.001282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.991 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.992 17:28:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.360 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:06.361 17:28:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.361 00:05:06.361 real 0m1.489s 00:05:06.361 user 0m1.344s 00:05:06.361 sys 0m0.147s 00:05:06.361 17:28:01 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.361 17:28:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:06.361 ************************************ 00:05:06.361 END TEST accel_xor 00:05:06.361 ************************************ 00:05:06.361 17:28:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.361 17:28:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:06.361 17:28:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:06.361 17:28:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.361 17:28:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.361 ************************************ 00:05:06.361 START TEST accel_dif_verify 00:05:06.361 ************************************ 00:05:06.361 17:28:01 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:06.361 17:28:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:06.361 [2024-07-15 17:28:01.351132] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
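accel_dif_verify switches to the DIF workloads; the extra '4096 bytes', '512 bytes' and '8 bytes' values in its trace appear to correspond to the data-buffer and DIF metadata sizes the wrapper expects for that opcode. The bare invocation, with the same path caveat as above:

  # DIF verify case as launched by run_test accel_dif_verify:
  ./build/examples/accel_perf -t 1 -w dif_verify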
00:05:06.361 [2024-07-15 17:28:01.351199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120240 ] 00:05:06.361 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.361 [2024-07-15 17:28:01.413817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.618 [2024-07-15 17:28:01.542113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:06.618 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:06.619 17:28:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:07.992 17:28:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.992 00:05:07.992 real 0m1.494s 00:05:07.992 user 0m1.348s 00:05:07.992 sys 0m0.148s 00:05:07.992 17:28:02 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.992 17:28:02 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:07.992 ************************************ 00:05:07.992 END TEST accel_dif_verify 00:05:07.992 ************************************ 00:05:07.992 17:28:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.992 17:28:02 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:07.992 17:28:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:07.992 17:28:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.992 17:28:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.992 ************************************ 00:05:07.992 START TEST accel_dif_generate 00:05:07.992 ************************************ 00:05:07.992 17:28:02 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:07.992 
17:28:02 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:07.992 17:28:02 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:07.992 [2024-07-15 17:28:02.895995] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:07.992 [2024-07-15 17:28:02.896063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120406 ] 00:05:07.992 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.992 [2024-07-15 17:28:02.955299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.992 [2024-07-15 17:28:03.077900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:08.252 17:28:03 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:08.252 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.253 17:28:03 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.253 17:28:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.627 17:28:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:09.627 17:28:04 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.627 00:05:09.627 real 0m1.478s 00:05:09.627 user 0m1.345s 00:05:09.627 sys 0m0.137s 00:05:09.627 17:28:04 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.627 17:28:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:09.627 ************************************ 00:05:09.627 END TEST accel_dif_generate 00:05:09.627 ************************************ 00:05:09.627 17:28:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:09.627 17:28:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:09.627 17:28:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:09.627 17:28:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.627 17:28:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.627 ************************************ 00:05:09.627 START TEST accel_dif_generate_copy 00:05:09.627 ************************************ 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:09.627 [2024-07-15 17:28:04.414874] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
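For reference, the dif_generate case that just completed boils down to a single accel_perf invocation; a minimal sketch of running it by hand, under the assumption that the -c /dev/fd/62 descriptor (the JSON accel config the harness builds, empty in this run) can be omitted when no module config is supplied:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate

The -t 1 and -w dif_generate flags correspond to the '1 seconds' and dif_generate values echoed in the trace, and the completed run above reported the software module and roughly 1.5 s of wall time.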
00:05:09.627 [2024-07-15 17:28:04.414967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120675 ] 00:05:09.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.627 [2024-07-15 17:28:04.478634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.627 [2024-07-15 17:28:04.601680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:09.627 17:28:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.003 00:05:11.003 real 0m1.490s 00:05:11.003 user 0m1.354s 00:05:11.003 sys 0m0.138s 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.003 17:28:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:11.003 ************************************ 00:05:11.003 END TEST accel_dif_generate_copy 00:05:11.003 ************************************ 00:05:11.003 17:28:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.003 17:28:05 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:11.003 17:28:05 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.003 17:28:05 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:11.003 17:28:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.003 17:28:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.003 ************************************ 00:05:11.003 START TEST accel_comp 00:05:11.003 ************************************ 00:05:11.003 17:28:05 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.003 17:28:05 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:11.003 17:28:05 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:11.003 [2024-07-15 17:28:05.950418] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:11.003 [2024-07-15 17:28:05.950486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120832 ] 00:05:11.003 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.003 [2024-07-15 17:28:06.013532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.003 [2024-07-15 17:28:06.136673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:11.262 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.263 17:28:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:12.637 17:28:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.637 00:05:12.637 real 0m1.483s 00:05:12.637 user 0m1.337s 00:05:12.637 sys 0m0.148s 00:05:12.637 17:28:07 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.637 17:28:07 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:12.637 ************************************ 00:05:12.637 END TEST accel_comp 00:05:12.637 ************************************ 00:05:12.637 17:28:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.637 17:28:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.637 17:28:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:12.637 17:28:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.637 17:28:07 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:12.637 ************************************ 00:05:12.637 START TEST accel_decomp 00:05:12.637 ************************************ 00:05:12.637 17:28:07 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:12.637 17:28:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:12.637 [2024-07-15 17:28:07.476851] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
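The decompress run being set up here follows the same pattern with an input file and verification added; a sketch of the equivalent manual command, with every flag and path copied verbatim from the command line echoed above (the standalone invocation and the exact meaning of -l and -y are assumptions to check against accel_perf's own help output):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y

Compared with the compress case that just finished, only the workload name and the trailing -y differ; the bib file under test/accel is the shared input for both.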
00:05:12.637 [2024-07-15 17:28:07.476935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120994 ] 00:05:12.638 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.638 [2024-07-15 17:28:07.537536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.638 [2024-07-15 17:28:07.659957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:12.638 17:28:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:14.013 17:28:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.013 00:05:14.013 real 0m1.478s 00:05:14.013 user 0m1.336s 00:05:14.013 sys 0m0.144s 00:05:14.013 17:28:08 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.013 17:28:08 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:14.013 ************************************ 00:05:14.013 END TEST accel_decomp 00:05:14.013 ************************************ 00:05:14.013 17:28:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.013 17:28:08 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.013 17:28:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:14.013 17:28:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.013 17:28:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.013 ************************************ 00:05:14.013 START TEST accel_decomp_full 00:05:14.013 ************************************ 00:05:14.013 17:28:08 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.013 17:28:08 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:14.013 17:28:08 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:14.013 [2024-07-15 17:28:08.999996] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:14.013 [2024-07-15 17:28:09.000062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121267 ] 00:05:14.013 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.013 [2024-07-15 17:28:09.061773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.272 [2024-07-15 17:28:09.185480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.272 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.273 17:28:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:15.647 17:28:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.647 00:05:15.647 real 0m1.501s 00:05:15.647 user 0m1.359s 00:05:15.647 sys 0m0.144s 00:05:15.647 17:28:10 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.647 17:28:10 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:15.647 ************************************ 00:05:15.647 END TEST accel_decomp_full 00:05:15.647 ************************************ 00:05:15.647 17:28:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.647 17:28:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:15.647 17:28:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:15.647 17:28:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.647 17:28:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.647 ************************************ 00:05:15.647 START TEST accel_decomp_mcore 00:05:15.647 ************************************ 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:15.647 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:15.647 [2024-07-15 17:28:10.544636] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
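The mcore variant starting here only adds a core mask on top of the decompress command; a sketch with the mask value taken from the echoed command line (same hedge as above about running it standalone):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf

With -m 0xf the EAL init that follows reports four available cores and starts a reactor on each of cores 0-3, which is reflected further down in this run's user time (0m4.806s, against roughly 0m1.3s for the single-core cases).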
00:05:15.647 [2024-07-15 17:28:10.544701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121421 ] 00:05:15.647 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.647 [2024-07-15 17:28:10.606472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.647 [2024-07-15 17:28:10.733168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.647 [2024-07-15 17:28:10.733221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.647 [2024-07-15 17:28:10.733274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.647 [2024-07-15 17:28:10.733278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:15.906 17:28:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.278 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.278 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.278 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.278 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.279 00:05:17.279 real 0m1.496s 00:05:17.279 user 0m4.806s 00:05:17.279 sys 0m0.151s 00:05:17.279 17:28:12 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.279 17:28:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 ************************************ 00:05:17.279 END TEST accel_decomp_mcore 00:05:17.279 ************************************ 00:05:17.279 17:28:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.279 17:28:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:17.279 17:28:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:17.279 17:28:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.279 17:28:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 ************************************ 00:05:17.279 START TEST accel_decomp_full_mcore 00:05:17.279 ************************************ 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:17.279 [2024-07-15 17:28:12.087613] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
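Aside: the accel_decomp_full_mcore case traced above is driven by test/accel/accel.sh, which launches the accel_perf example with the flags visible in the trace (the harness feeds its JSON accel config over /dev/fd/62). The following is a minimal standalone sketch of the same run, not a verbatim harness command; SPDK_DIR and running without the config descriptor are assumptions, everything else is copied from the trace.

    # Sketch only, flags taken from the trace above:
    #   -t 1           run for 1 second (trace: val='1 seconds')
    #   -w decompress  workload under test
    #   -l .../bib     compressed input bundled with the accel tests
    #   -y             verify the decompressed output
    #   -o 0           block-size argument used by the 'full' variants; the trace shows it
    #                  resolving to '111250 bytes'
    #   -m 0xf         core mask for cores 0-3, matching the four reactors reported above
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout/build location
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf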
00:05:17.279 [2024-07-15 17:28:12.087664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121590 ] 00:05:17.279 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.279 [2024-07-15 17:28:12.150665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.279 [2024-07-15 17:28:12.275486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.279 [2024-07-15 17:28:12.275552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.279 [2024-07-15 17:28:12.275604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.279 [2024-07-15 17:28:12.275607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.279 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.280 17:28:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.654 00:05:18.654 real 0m1.512s 00:05:18.654 user 0m4.867s 00:05:18.654 sys 0m0.155s 00:05:18.654 17:28:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.655 17:28:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:18.655 ************************************ 00:05:18.655 END TEST accel_decomp_full_mcore 00:05:18.655 ************************************ 00:05:18.655 17:28:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.655 17:28:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:18.655 17:28:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:18.655 17:28:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.655 17:28:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.655 ************************************ 00:05:18.655 START TEST accel_decomp_mthread 00:05:18.655 ************************************ 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:18.655 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:18.655 [2024-07-15 17:28:13.654886] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
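Aside: accel_decomp_mthread runs the same decompress workload, but the command in the trace adds -T 2 and drops the explicit core mask, so a single reactor runs on core 0 and the default block size is used (trace: val='4096 bytes'). A hedged sketch of that invocation follows; reading -T as a worker-thread count is an assumption based on accel_perf's usage text, and SPDK_DIR is assumed as before.

    # Sketch only: single-core, two-thread decompress run as traced above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout/build location
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -T 2   # -T 2: two worker threads (assumed meaning)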
00:05:18.655 [2024-07-15 17:28:13.654960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121863 ] 00:05:18.655 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.655 [2024-07-15 17:28:13.719393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.913 [2024-07-15 17:28:13.840871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:18.913 17:28:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.286 17:28:15 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:20.286 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.286 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.286 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.286 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.286 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.286 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.286 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.287 00:05:20.287 real 0m1.500s 00:05:20.287 user 0m1.355s 00:05:20.287 sys 0m0.147s 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.287 17:28:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:20.287 ************************************ 00:05:20.287 END TEST accel_decomp_mthread 00:05:20.287 ************************************ 00:05:20.287 17:28:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.287 17:28:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.287 17:28:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:20.287 17:28:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.287 17:28:15 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.287 ************************************ 00:05:20.287 START TEST accel_decomp_full_mthread 00:05:20.287 ************************************ 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:20.287 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:20.287 [2024-07-15 17:28:15.203233] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
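Aside: the DPDK EAL parameter lines in this section show how the accel_perf core mask carries through to initialization: the mcore cases pass -m 0xf and start four reactors under EAL -c 0xf, while the mthread cases omit -m and start one reactor on core 0 under EAL -c 0x1. The mapping below is read off this log only, not from accel_perf's source:

    # accel_perf -m 0xf   ->  EAL '-c 0xf'  ->  reactors on cores 0-3 (mcore / full_mcore)
    # accel_perf (no -m)  ->  EAL '-c 0x1'  ->  one reactor on core 0 (mthread / full_mthread)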
00:05:20.287 [2024-07-15 17:28:15.203309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122020 ] 00:05:20.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.287 [2024-07-15 17:28:15.265657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.287 [2024-07-15 17:28:15.386764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.544 17:28:15 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.544 17:28:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.917 00:05:21.917 real 0m1.521s 00:05:21.917 user 0m1.386s 00:05:21.917 sys 0m0.137s 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.917 17:28:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:21.917 ************************************ 00:05:21.917 END 
TEST accel_decomp_full_mthread 00:05:21.917 ************************************ 00:05:21.917 17:28:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.917 17:28:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:21.917 17:28:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:21.917 17:28:16 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:21.917 17:28:16 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:21.917 17:28:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.917 17:28:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.917 17:28:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.917 17:28:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.917 17:28:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.917 17:28:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.917 17:28:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.917 17:28:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:21.917 17:28:16 accel -- accel/accel.sh@41 -- # jq -r . 00:05:21.917 ************************************ 00:05:21.917 START TEST accel_dif_functional_tests 00:05:21.917 ************************************ 00:05:21.917 17:28:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:21.917 [2024-07-15 17:28:16.786982] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:21.917 [2024-07-15 17:28:16.787054] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122304 ] 00:05:21.917 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.917 [2024-07-15 17:28:16.843829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.917 [2024-07-15 17:28:16.966863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.917 [2024-07-15 17:28:16.966916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.917 [2024-07-15 17:28:16.966921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.175 00:05:22.175 00:05:22.175 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.175 http://cunit.sourceforge.net/ 00:05:22.175 00:05:22.175 00:05:22.175 Suite: accel_dif 00:05:22.175 Test: verify: DIF generated, GUARD check ...passed 00:05:22.175 Test: verify: DIF generated, APPTAG check ...passed 00:05:22.175 Test: verify: DIF generated, REFTAG check ...passed 00:05:22.175 Test: verify: DIF not generated, GUARD check ...[2024-07-15 17:28:17.069128] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:22.175 passed 00:05:22.175 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 17:28:17.069200] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:22.175 passed 00:05:22.175 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 17:28:17.069237] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:22.175 passed 00:05:22.175 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:22.175 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
17:28:17.069318] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:22.175 passed 00:05:22.175 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:22.175 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:22.175 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:22.175 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 17:28:17.069489] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:22.175 passed 00:05:22.175 Test: verify copy: DIF generated, GUARD check ...passed 00:05:22.175 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:22.175 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:22.175 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 17:28:17.069667] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:22.175 passed 00:05:22.175 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 17:28:17.069710] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:22.175 passed 00:05:22.175 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 17:28:17.069749] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:22.175 passed 00:05:22.175 Test: generate copy: DIF generated, GUARD check ...passed 00:05:22.175 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:22.175 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:22.175 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:22.175 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:22.175 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:22.175 Test: generate copy: iovecs-len validate ...[2024-07-15 17:28:17.070016] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:22.175 passed 00:05:22.175 Test: generate copy: buffer alignment validate ...passed 00:05:22.175 00:05:22.175 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.176 suites 1 1 n/a 0 0 00:05:22.176 tests 26 26 26 0 0 00:05:22.176 asserts 115 115 115 0 n/a 00:05:22.176 00:05:22.176 Elapsed time = 0.003 seconds 00:05:22.433 00:05:22.433 real 0m0.595s 00:05:22.433 user 0m0.908s 00:05:22.433 sys 0m0.181s 00:05:22.433 17:28:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.433 17:28:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:22.433 ************************************ 00:05:22.433 END TEST accel_dif_functional_tests 00:05:22.433 ************************************ 00:05:22.433 17:28:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.433 00:05:22.433 real 0m33.534s 00:05:22.433 user 0m36.994s 00:05:22.433 sys 0m4.583s 00:05:22.433 17:28:17 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.433 17:28:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.433 ************************************ 00:05:22.433 END TEST accel 00:05:22.433 ************************************ 00:05:22.433 17:28:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.433 17:28:17 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:22.433 17:28:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.433 17:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.433 17:28:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.433 ************************************ 00:05:22.433 START TEST accel_rpc 00:05:22.433 ************************************ 00:05:22.433 17:28:17 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:22.433 * Looking for test storage... 00:05:22.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:22.433 17:28:17 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.433 17:28:17 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2122366 00:05:22.433 17:28:17 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:22.433 17:28:17 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2122366 00:05:22.433 17:28:17 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2122366 ']' 00:05:22.433 17:28:17 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.433 17:28:17 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.433 17:28:17 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.433 17:28:17 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.433 17:28:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.433 [2024-07-15 17:28:17.522313] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
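Aside: the accel_rpc suite that starts here exercises the opcode-assignment RPCs through the harness's rpc_cmd wrapper, as the trace below shows: assign the copy opcode to a non-existent module, reassign it to software, finish startup with framework_start_init, then confirm the assignment. A hedged equivalent using scripts/rpc.py directly is sketched here; talking to the default /var/tmp/spdk.sock socket of the spdk_tgt started above is an assumption.

    # Sketch only: mirrors the rpc_cmd calls traced in this suite.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" accel_assign_opc -o copy -m incorrect   # accepted while --wait-for-rpc holds off init
    "$RPC" accel_assign_opc -o copy -m software    # overrides the bogus assignment
    "$RPC" framework_start_init                    # complete startup so the assignment takes effect
    "$RPC" accel_get_opc_assignments | jq -r .copy | grep software   # expect: software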
00:05:22.433 [2024-07-15 17:28:17.522392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122366 ] 00:05:22.433 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.691 [2024-07-15 17:28:17.580117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.691 [2024-07-15 17:28:17.686330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.691 17:28:17 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.691 17:28:17 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:22.691 17:28:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:22.691 17:28:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:22.691 17:28:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:22.691 17:28:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:22.691 17:28:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:22.691 17:28:17 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.691 17:28:17 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.691 17:28:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.691 ************************************ 00:05:22.691 START TEST accel_assign_opcode 00:05:22.691 ************************************ 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:22.691 [2024-07-15 17:28:17.754949] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:22.691 [2024-07-15 17:28:17.762952] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.691 17:28:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.949 software 00:05:22.949 00:05:22.949 real 0m0.296s 00:05:22.949 user 0m0.031s 00:05:22.949 sys 0m0.009s 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.949 17:28:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:22.949 ************************************ 00:05:22.949 END TEST accel_assign_opcode 00:05:22.949 ************************************ 00:05:22.949 17:28:18 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.949 17:28:18 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2122366 00:05:22.949 17:28:18 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2122366 ']' 00:05:22.949 17:28:18 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2122366 00:05:22.949 17:28:18 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:22.949 17:28:18 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.949 17:28:18 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2122366 00:05:23.207 17:28:18 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.207 17:28:18 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.207 17:28:18 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2122366' 00:05:23.207 killing process with pid 2122366 00:05:23.207 17:28:18 accel_rpc -- common/autotest_common.sh@967 -- # kill 2122366 00:05:23.207 17:28:18 accel_rpc -- common/autotest_common.sh@972 -- # wait 2122366 00:05:23.465 00:05:23.465 real 0m1.168s 00:05:23.465 user 0m1.093s 00:05:23.465 sys 0m0.423s 00:05:23.465 17:28:18 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.465 17:28:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.465 ************************************ 00:05:23.465 END TEST accel_rpc 00:05:23.465 ************************************ 00:05:23.723 17:28:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.723 17:28:18 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:23.723 17:28:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.723 17:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.723 17:28:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.723 ************************************ 00:05:23.723 START TEST app_cmdline 00:05:23.723 ************************************ 00:05:23.723 17:28:18 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:23.723 * Looking for test storage... 
00:05:23.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.723 17:28:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:23.723 17:28:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2122572 00:05:23.723 17:28:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:23.723 17:28:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2122572 00:05:23.723 17:28:18 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2122572 ']' 00:05:23.723 17:28:18 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.723 17:28:18 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.723 17:28:18 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.724 17:28:18 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.724 17:28:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.724 [2024-07-15 17:28:18.741299] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:23.724 [2024-07-15 17:28:18.741385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122572 ] 00:05:23.724 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.724 [2024-07-15 17:28:18.800417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.980 [2024-07-15 17:28:18.908565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.237 17:28:19 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.237 17:28:19 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:24.237 17:28:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:24.494 { 00:05:24.494 "version": "SPDK v24.09-pre git sha1 d8f06a5fe", 00:05:24.494 "fields": { 00:05:24.494 "major": 24, 00:05:24.494 "minor": 9, 00:05:24.494 "patch": 0, 00:05:24.494 "suffix": "-pre", 00:05:24.494 "commit": "d8f06a5fe" 00:05:24.494 } 00:05:24.494 } 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:24.494 17:28:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.494 17:28:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:24.494 17:28:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:24.494 17:28:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.494 17:28:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:24.494 17:28:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:24.495 17:28:19 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.752 request: 00:05:24.752 { 00:05:24.752 "method": "env_dpdk_get_mem_stats", 00:05:24.752 "req_id": 1 00:05:24.752 } 00:05:24.752 Got JSON-RPC error response 00:05:24.752 response: 00:05:24.752 { 00:05:24.752 "code": -32601, 00:05:24.752 "message": "Method not found" 00:05:24.752 } 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.752 17:28:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2122572 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2122572 ']' 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2122572 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2122572 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2122572' 00:05:24.752 killing process with pid 2122572 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@967 -- # kill 2122572 00:05:24.752 17:28:19 app_cmdline -- common/autotest_common.sh@972 -- # wait 2122572 00:05:25.316 00:05:25.316 real 0m1.563s 00:05:25.316 user 0m1.883s 00:05:25.316 sys 0m0.478s 00:05:25.316 17:28:20 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
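The cmdline test passes because spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; anything else, such as env_dpdk_get_mem_stats above, is rejected with JSON-RPC error -32601 ("Method not found"). A condensed sketch of that behaviour, assuming a local build and the same allow-list (paths illustrative):

# Start the target with an RPC allow-list containing exactly two methods.
build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# (wait for /var/tmp/spdk.sock to appear before issuing calls)
scripts/rpc.py spdk_get_version                       # allowed: returns the version object shown above
scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: lists the permitted methods
scripts/rpc.py env_dpdk_get_mem_stats                 # not on the list: fails with -32601 Method not found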
00:05:25.316 17:28:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.316 ************************************ 00:05:25.316 END TEST app_cmdline 00:05:25.316 ************************************ 00:05:25.316 17:28:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.316 17:28:20 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:25.316 17:28:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.316 17:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.316 17:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.316 ************************************ 00:05:25.316 START TEST version 00:05:25.316 ************************************ 00:05:25.316 17:28:20 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:25.316 * Looking for test storage... 00:05:25.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:25.316 17:28:20 version -- app/version.sh@17 -- # get_header_version major 00:05:25.316 17:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.316 17:28:20 version -- app/version.sh@17 -- # major=24 00:05:25.316 17:28:20 version -- app/version.sh@18 -- # get_header_version minor 00:05:25.316 17:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.316 17:28:20 version -- app/version.sh@18 -- # minor=9 00:05:25.316 17:28:20 version -- app/version.sh@19 -- # get_header_version patch 00:05:25.316 17:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.316 17:28:20 version -- app/version.sh@19 -- # patch=0 00:05:25.316 17:28:20 version -- app/version.sh@20 -- # get_header_version suffix 00:05:25.316 17:28:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # cut -f2 00:05:25.316 17:28:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:25.316 17:28:20 version -- app/version.sh@20 -- # suffix=-pre 00:05:25.316 17:28:20 version -- app/version.sh@22 -- # version=24.9 00:05:25.316 17:28:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:25.316 17:28:20 version -- app/version.sh@28 -- # version=24.9rc0 00:05:25.316 17:28:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:25.316 17:28:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:25.316 17:28:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:25.316 17:28:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:25.316 00:05:25.316 real 0m0.103s 00:05:25.316 user 0m0.059s 00:05:25.316 sys 0m0.064s 00:05:25.316 17:28:20 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.316 17:28:20 version -- common/autotest_common.sh@10 -- # set +x 00:05:25.316 ************************************ 00:05:25.316 END TEST version 00:05:25.316 ************************************ 00:05:25.316 17:28:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.316 17:28:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:25.316 17:28:20 -- spdk/autotest.sh@198 -- # uname -s 00:05:25.316 17:28:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:25.316 17:28:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:25.316 17:28:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:25.316 17:28:20 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:25.316 17:28:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:25.316 17:28:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:25.317 17:28:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.317 17:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.317 17:28:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:25.317 17:28:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:25.317 17:28:20 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:25.317 17:28:20 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:25.317 17:28:20 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:25.317 17:28:20 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:25.317 17:28:20 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:25.317 17:28:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:25.317 17:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.317 17:28:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.317 ************************************ 00:05:25.317 START TEST nvmf_tcp 00:05:25.317 ************************************ 00:05:25.317 17:28:20 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:25.574 * Looking for test storage... 00:05:25.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.574 17:28:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.575 17:28:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.575 17:28:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.575 17:28:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.575 17:28:20 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:25.575 17:28:20 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:25.575 17:28:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.575 17:28:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:25.575 17:28:20 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:25.575 17:28:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:25.575 17:28:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.575 17:28:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.575 ************************************ 00:05:25.575 START TEST nvmf_example 00:05:25.575 ************************************ 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:25.575 * Looking for test storage... 
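nvmf/common.sh seeds the initiator identity from nvme-cli: the host NQN traced above comes from nvme gen-hostnqn, the host ID is the UUID portion of that NQN, and both are later handed to the connect path as --hostnqn/--hostid. A small sketch of the same derivation (the parameter expansion is an assumption about how the UUID is extracted, not lifted from common.sh):

# Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>.
NVME_HOSTNQN=$(nvme gen-hostnqn)
# Assumption for illustration: keep only the trailing UUID as the host ID.
NVME_HOSTID=${NVME_HOSTNQN##*:}
echo "$NVME_HOSTNQN" "$NVME_HOSTID"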
00:05:25.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:25.575 17:28:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:27.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:27.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:27.473 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:27.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:27.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:27.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:05:27.473 00:05:27.473 --- 10.0.0.2 ping statistics --- 00:05:27.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.473 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:27.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:27.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:05:27.473 00:05:27.473 --- 10.0.0.1 ping statistics --- 00:05:27.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.473 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:27.473 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2124592 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2124592 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2124592 ']' 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
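With NET_TYPE=phy, nvmf_tcp_init splits the two detected ports across network namespaces so the NVMe/TCP traffic really leaves the local stack: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24 while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and both directions are ping-checked before the target starts. Condensed from the commands traced above (run as root; the interface names are the ones this rig reports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit the NVMe/TCP listener port
ping -c 1 10.0.0.2                                              # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> initiator address
modprobe nvme-tcp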
00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.474 17:28:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.732 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:28.663 17:28:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:28.663 EAL: No free 2048 kB hugepages reported on node 1 
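The example target is then configured entirely over RPC before load is applied: a TCP transport (-o -u 8192), a 64 MB malloc bdev with 512-byte blocks, one subsystem carrying that namespace, and a TCP listener on 10.0.0.2:4420; spdk_nvme_perf afterwards drives 4 KiB random read/write I/O (-M 30 mix) at queue depth 64 for 10 seconds against it. The same sequence expressed as direct rpc.py calls is sketched below; the test itself issues them through its rpc_cmd wrapper, so the rpc.py form is an assumption:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                        # creates Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side (root namespace), 10 s of 4 KiB random I/O at queue depth 64:
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'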
00:05:40.908 Initializing NVMe Controllers 00:05:40.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:40.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:40.908 Initialization complete. Launching workers. 00:05:40.908 ======================================================== 00:05:40.908 Latency(us) 00:05:40.908 Device Information : IOPS MiB/s Average min max 00:05:40.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14876.89 58.11 4302.05 871.51 24363.15 00:05:40.908 ======================================================== 00:05:40.908 Total : 14876.89 58.11 4302.05 871.51 24363.15 00:05:40.908 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:40.908 rmmod nvme_tcp 00:05:40.908 rmmod nvme_fabrics 00:05:40.908 rmmod nvme_keyring 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2124592 ']' 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2124592 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2124592 ']' 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2124592 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2124592 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2124592' 00:05:40.908 killing process with pid 2124592 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2124592 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2124592 00:05:40.908 nvmf threads initialize successfully 00:05:40.908 bdev subsystem init successfully 00:05:40.908 created a nvmf target service 00:05:40.908 create targets's poll groups done 00:05:40.908 all subsystems of target started 00:05:40.908 nvmf target is running 00:05:40.908 all subsystems of target stopped 00:05:40.908 destroy targets's poll groups done 00:05:40.908 destroyed the nvmf target service 00:05:40.908 bdev subsystem finish successfully 00:05:40.908 nvmf threads destroy successfully 00:05:40.908 17:28:34 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:40.908 17:28:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:41.480 17:28:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:41.480 17:28:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:41.480 17:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.480 17:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.480 00:05:41.480 real 0m15.920s 00:05:41.480 user 0m45.728s 00:05:41.480 sys 0m3.194s 00:05:41.480 17:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.480 17:28:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:41.480 ************************************ 00:05:41.480 END TEST nvmf_example 00:05:41.480 ************************************ 00:05:41.480 17:28:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:41.480 17:28:36 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:41.480 17:28:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:41.480 17:28:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.480 17:28:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.480 ************************************ 00:05:41.480 START TEST nvmf_filesystem 00:05:41.480 ************************************ 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:41.480 * Looking for test storage... 
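For an interactive check against a target configured like the nvmf_example one above, the common.sh variables map directly onto nvme-cli; this is a hypothetical illustration of that mapping, not a step this run performs (the example target has already been shut down by this point):

# Hypothetical manual connect from the initiator side, against a still-running target:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme list                                                       # the Malloc0-backed namespace should show up
nvme disconnect -n nqn.2016-06.io.spdk:cnode1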
00:05:41.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:41.480 17:28:36 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:41.480 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:41.481 #define SPDK_CONFIG_H 00:05:41.481 #define SPDK_CONFIG_APPS 1 00:05:41.481 #define SPDK_CONFIG_ARCH native 00:05:41.481 #undef SPDK_CONFIG_ASAN 00:05:41.481 #undef SPDK_CONFIG_AVAHI 00:05:41.481 #undef SPDK_CONFIG_CET 00:05:41.481 #define SPDK_CONFIG_COVERAGE 1 00:05:41.481 #define SPDK_CONFIG_CROSS_PREFIX 00:05:41.481 #undef SPDK_CONFIG_CRYPTO 00:05:41.481 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:41.481 #undef SPDK_CONFIG_CUSTOMOCF 00:05:41.481 #undef SPDK_CONFIG_DAOS 00:05:41.481 #define SPDK_CONFIG_DAOS_DIR 00:05:41.481 #define SPDK_CONFIG_DEBUG 1 00:05:41.481 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:41.481 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:41.481 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:41.481 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:41.481 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:41.481 #undef SPDK_CONFIG_DPDK_UADK 00:05:41.481 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:41.481 #define SPDK_CONFIG_EXAMPLES 1 00:05:41.481 #undef SPDK_CONFIG_FC 00:05:41.481 #define SPDK_CONFIG_FC_PATH 00:05:41.481 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:41.481 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:41.481 #undef SPDK_CONFIG_FUSE 00:05:41.481 #undef SPDK_CONFIG_FUZZER 00:05:41.481 #define SPDK_CONFIG_FUZZER_LIB 00:05:41.481 #undef SPDK_CONFIG_GOLANG 00:05:41.481 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:41.481 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:41.481 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:41.481 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:41.481 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:41.481 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:41.481 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:41.481 #define SPDK_CONFIG_IDXD 1 00:05:41.481 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:41.481 #undef SPDK_CONFIG_IPSEC_MB 00:05:41.481 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:41.481 #define SPDK_CONFIG_ISAL 1 00:05:41.481 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:41.481 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:41.481 #define SPDK_CONFIG_LIBDIR 00:05:41.481 #undef SPDK_CONFIG_LTO 00:05:41.481 #define SPDK_CONFIG_MAX_LCORES 128 00:05:41.481 #define SPDK_CONFIG_NVME_CUSE 1 00:05:41.481 #undef SPDK_CONFIG_OCF 00:05:41.481 #define SPDK_CONFIG_OCF_PATH 00:05:41.481 #define 
SPDK_CONFIG_OPENSSL_PATH 00:05:41.481 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:41.481 #define SPDK_CONFIG_PGO_DIR 00:05:41.481 #undef SPDK_CONFIG_PGO_USE 00:05:41.481 #define SPDK_CONFIG_PREFIX /usr/local 00:05:41.481 #undef SPDK_CONFIG_RAID5F 00:05:41.481 #undef SPDK_CONFIG_RBD 00:05:41.481 #define SPDK_CONFIG_RDMA 1 00:05:41.481 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:41.481 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:41.481 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:41.481 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:41.481 #define SPDK_CONFIG_SHARED 1 00:05:41.481 #undef SPDK_CONFIG_SMA 00:05:41.481 #define SPDK_CONFIG_TESTS 1 00:05:41.481 #undef SPDK_CONFIG_TSAN 00:05:41.481 #define SPDK_CONFIG_UBLK 1 00:05:41.481 #define SPDK_CONFIG_UBSAN 1 00:05:41.481 #undef SPDK_CONFIG_UNIT_TESTS 00:05:41.481 #undef SPDK_CONFIG_URING 00:05:41.481 #define SPDK_CONFIG_URING_PATH 00:05:41.481 #undef SPDK_CONFIG_URING_ZNS 00:05:41.481 #undef SPDK_CONFIG_USDT 00:05:41.481 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:41.481 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:41.481 #define SPDK_CONFIG_VFIO_USER 1 00:05:41.481 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:41.481 #define SPDK_CONFIG_VHOST 1 00:05:41.481 #define SPDK_CONFIG_VIRTIO 1 00:05:41.481 #undef SPDK_CONFIG_VTUNE 00:05:41.481 #define SPDK_CONFIG_VTUNE_DIR 00:05:41.481 #define SPDK_CONFIG_WERROR 1 00:05:41.481 #define SPDK_CONFIG_WPDK_DIR 00:05:41.481 #undef SPDK_CONFIG_XNVME 00:05:41.481 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:41.481 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:41.482 17:28:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:41.482 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
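At this point autotest_common.sh has finished exporting the runtime environment for the run: library search paths (the SPDK and DPDK build/lib directories plus libvfio-user), PYTHONPATH for the RPC plugins, sanitizer behaviour, an LSAN suppression for libfuse3, the default JSON-RPC socket, the QEMU binaries (vanilla and vfio-user builds), and hugepage settings (HUGEMEM=4096, CLEAR_HUGE=yes). A minimal equivalent of the sanitizer-related exports for repeating the run by hand, with the values copied from the trace (the suppression file is recreated here with just the libfuse entry):

  # Sketch only -- mirrors the exports visible in the trace above.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file     # LSAN suppression for FUSE
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
  export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock                   # default JSON-RPC socket
  export HUGEMEM=4096 CLEAR_HUGE=yes                           # hugepage budget for DPDK/SPDK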
00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2126303 ]] 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2126303 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.jd8UhB 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.jd8UhB/tests/target /tmp/spdk.jd8UhB 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55570874368 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6423818240 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:41.483 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996250624 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:05:41.743 17:28:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1097728 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:41.743 * Looking for test storage... 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55570874368 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8638410752 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:41.743 17:28:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.743 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
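Two pieces of setup are visible just above: set_test_storage picks a directory with enough free space for test data and exports it as SPDK_TEST_STORAGE (here the workspace itself, backed by the spdk_root overlay with roughly 55 GB available), and nvmf/common.sh starts defining the TCP test defaults (ports 4420-4422 and a host NQN generated with nvme gen-hostnqn). The storage selection boils down to walking a few candidate directories and keeping the first one whose backing mount, as reported by df, has at least the requested ~2 GiB free. A simplified sketch of that idea, not the verbatim autotest_common.sh code, assuming $testdir is set to the test's directory as the harness does:

  # Simplified sketch of the test-storage selection traced above (not the exact SPDK code).
  requested_size=$((2 * 1024 * 1024 * 1024))                     # ~2 GiB, as requested here
  storage_fallback=$(mktemp -udt spdk.XXXXXX)                    # e.g. /tmp/spdk.jd8UhB in this run
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mkdir -p "$target_dir"
      avail=$(df -B1 --output=avail "$target_dir" | tail -1)     # free bytes on the backing mount
      if (( avail >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir
          break
      fi
  done
  echo "* Found test storage at $SPDK_TEST_STORAGE"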
00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.744 17:28:36 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:41.744 17:28:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:43.649 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:43.649 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:43.649 17:28:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:43.649 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:43.649 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:43.649 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:43.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:43.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:05:43.908 00:05:43.908 --- 10.0.0.2 ping statistics --- 00:05:43.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.908 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:43.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:43.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:05:43.908 00:05:43.908 --- 10.0.0.1 ping statistics --- 00:05:43.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.908 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:43.908 ************************************ 00:05:43.908 START TEST nvmf_filesystem_no_in_capsule 00:05:43.908 ************************************ 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
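nvmf_tcp_init, traced above, builds the loop-back topology the rest of the suite runs on: the first E810 port (cvl_0_0) becomes the target and is moved into its own network namespace with 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator with 10.0.0.1/24, TCP port 4420 is opened in iptables, connectivity is verified with a ping in each direction, and nvme-tcp is loaded on the initiator side. Condensed into a plain shell sketch, with the interface and namespace names taken from this run:

  TARGET_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INIT_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INIT_IF"            # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target ns -> root ns
  modprobe nvme-tcp                                 # initiator-side NVMe/TCP driver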
xtrace_disable 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2127930 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2127930 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2127930 ']' 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.908 17:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:43.908 [2024-07-15 17:28:38.946086] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:43.908 [2024-07-15 17:28:38.946175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:43.908 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.908 [2024-07-15 17:28:39.016497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.166 [2024-07-15 17:28:39.143059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:44.166 [2024-07-15 17:28:39.143114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:44.166 [2024-07-15 17:28:39.143138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.166 [2024-07-15 17:28:39.143151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.166 [2024-07-15 17:28:39.143163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
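nvmfappstart then launches the SPDK target inside that namespace (pid 2127930 here) and waitforlisten blocks until the JSON-RPC socket at /var/tmp/spdk.sock answers. A simplified sketch of that startup handshake, assuming the SPDK repo root as the working directory; the polling loop is an illustration of what waitforlisten does, not its exact code:

  NS=cvl_0_0_ns_spdk
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target answers (simplified waitforlisten).
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done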
00:05:44.166 [2024-07-15 17:28:39.143275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.166 [2024-07-15 17:28:39.143328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.166 [2024-07-15 17:28:39.143382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.166 [2024-07-15 17:28:39.143385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.100 [2024-07-15 17:28:39.970054] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.100 17:28:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.100 Malloc1 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.100 [2024-07-15 17:28:40.159332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:45.100 { 00:05:45.100 "name": "Malloc1", 00:05:45.100 "aliases": [ 00:05:45.100 "2704ace2-1d49-46f2-8af0-9b6ec996de37" 00:05:45.100 ], 00:05:45.100 "product_name": "Malloc disk", 00:05:45.100 "block_size": 512, 00:05:45.100 "num_blocks": 1048576, 00:05:45.100 "uuid": "2704ace2-1d49-46f2-8af0-9b6ec996de37", 00:05:45.100 "assigned_rate_limits": { 00:05:45.100 "rw_ios_per_sec": 0, 00:05:45.100 "rw_mbytes_per_sec": 0, 00:05:45.100 "r_mbytes_per_sec": 0, 00:05:45.100 "w_mbytes_per_sec": 0 00:05:45.100 }, 00:05:45.100 "claimed": true, 00:05:45.100 "claim_type": "exclusive_write", 00:05:45.100 "zoned": false, 00:05:45.100 "supported_io_types": { 00:05:45.100 "read": true, 00:05:45.100 "write": true, 00:05:45.100 "unmap": true, 00:05:45.100 "flush": true, 00:05:45.100 "reset": true, 00:05:45.100 "nvme_admin": false, 00:05:45.100 "nvme_io": false, 00:05:45.100 "nvme_io_md": false, 00:05:45.100 "write_zeroes": true, 00:05:45.100 "zcopy": true, 00:05:45.100 "get_zone_info": false, 00:05:45.100 "zone_management": false, 00:05:45.100 "zone_append": false, 00:05:45.100 "compare": false, 00:05:45.100 "compare_and_write": false, 00:05:45.100 "abort": true, 00:05:45.100 "seek_hole": false, 00:05:45.100 "seek_data": false, 00:05:45.100 "copy": true, 00:05:45.100 "nvme_iov_md": false 00:05:45.100 }, 00:05:45.100 "memory_domains": [ 00:05:45.100 { 
00:05:45.100 "dma_device_id": "system", 00:05:45.100 "dma_device_type": 1 00:05:45.100 }, 00:05:45.100 { 00:05:45.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.100 "dma_device_type": 2 00:05:45.100 } 00:05:45.100 ], 00:05:45.100 "driver_specific": {} 00:05:45.100 } 00:05:45.100 ]' 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:45.100 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:45.358 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:45.358 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:45.358 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:45.358 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:45.358 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:45.925 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:45.925 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:45.925 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:45.925 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:45.925 17:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:05:47.820 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:05:47.820 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:05:47.820 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:48.077 17:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:48.333 17:28:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:48.894 17:28:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:49.825 ************************************ 00:05:49.825 START TEST filesystem_ext4 00:05:49.825 ************************************ 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:05:49.825 17:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:05:49.825 17:28:44 
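Before any filesystem is created, the script checks that the exported namespace is exactly the size of the Malloc bdev behind it and lays down a single GPT partition spanning the device. Roughly as follows; sec_size_to_bytes reads /sys/block/<dev>/size and multiplies by 512, and blockdev --getsize64 below is an equivalent substitute used for brevity:

  nvme_size=$(blockdev --getsize64 /dev/nvme0n1)
  [ "$nvme_size" -eq "$malloc_size" ] || exit 1      # both should be 536870912
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe                                          # let the kernel see nvme0n1p1
  sleep 1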
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:49.825 mke2fs 1.46.5 (30-Dec-2021) 00:05:50.082 Discarding device blocks: 0/522240 done 00:05:50.082 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:50.082 Filesystem UUID: f2f136b2-fb3e-4063-8ae9-bfc601f24c6b 00:05:50.082 Superblock backups stored on blocks: 00:05:50.082 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:50.082 00:05:50.082 Allocating group tables: 0/64 done 00:05:50.082 Writing inode tables: 0/64 done 00:05:52.605 Creating journal (8192 blocks): done 00:05:52.605 Writing superblocks and filesystem accounting information: 0/64 done 00:05:52.605 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2127930 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:52.606 00:05:52.606 real 0m2.541s 00:05:52.606 user 0m0.021s 00:05:52.606 sys 0m0.056s 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:05:52.606 ************************************ 00:05:52.606 END TEST filesystem_ext4 00:05:52.606 ************************************ 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:52.606 17:28:47 
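The ext4 case above is the first of three runs of the same smoke test; filesystem.sh repeats it for btrfs and xfs below. The per-filesystem body boils down to the following (ext4 forces with -F, btrfs and xfs with -f):

  dev=/dev/nvme0n1p1
  case "$fstype" in
      ext4) mkfs.ext4 -F "$dev" ;;
      *)    "mkfs.$fstype" -f "$dev" ;;    # btrfs, xfs
  esac
  mount "$dev" /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # the target must have survived the I/O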
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:52.606 ************************************ 00:05:52.606 START TEST filesystem_btrfs 00:05:52.606 ************************************ 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:05:52.606 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:52.864 btrfs-progs v6.6.2 00:05:52.864 See https://btrfs.readthedocs.io for more information. 00:05:52.864 00:05:52.864 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:05:52.864 NOTE: several default settings have changed in version 5.15, please make sure 00:05:52.864 this does not affect your deployments: 00:05:52.864 - DUP for metadata (-m dup) 00:05:52.864 - enabled no-holes (-O no-holes) 00:05:52.864 - enabled free-space-tree (-R free-space-tree) 00:05:52.864 00:05:52.864 Label: (null) 00:05:52.864 UUID: c2522a5a-e502-40b1-91dd-ac54c281647d 00:05:52.864 Node size: 16384 00:05:52.864 Sector size: 4096 00:05:52.864 Filesystem size: 510.00MiB 00:05:52.864 Block group profiles: 00:05:52.864 Data: single 8.00MiB 00:05:52.864 Metadata: DUP 32.00MiB 00:05:52.864 System: DUP 8.00MiB 00:05:52.864 SSD detected: yes 00:05:52.864 Zoned device: no 00:05:52.864 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:52.864 Runtime features: free-space-tree 00:05:52.864 Checksum: crc32c 00:05:52.864 Number of devices: 1 00:05:52.864 Devices: 00:05:52.864 ID SIZE PATH 00:05:52.864 1 510.00MiB /dev/nvme0n1p1 00:05:52.864 00:05:52.864 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:05:52.864 17:28:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2127930 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:53.825 00:05:53.825 real 0m1.222s 00:05:53.825 user 0m0.027s 00:05:53.825 sys 0m0.106s 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:05:53.825 ************************************ 00:05:53.825 END TEST filesystem_btrfs 00:05:53.825 ************************************ 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:53.825 ************************************ 00:05:53.825 START TEST filesystem_xfs 00:05:53.825 ************************************ 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:05:53.825 17:28:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:53.825 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:53.825 = sectsz=512 attr=2, projid32bit=1 00:05:53.825 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:53.825 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:53.825 data = bsize=4096 blocks=130560, imaxpct=25 00:05:53.825 = sunit=0 swidth=0 blks 00:05:53.825 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:53.825 log =internal log bsize=4096 blocks=16384, version=2 00:05:53.825 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:53.825 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:54.757 Discarding blocks...Done. 
00:05:54.757 17:28:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:05:54.757 17:28:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:57.323 17:28:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:57.323 17:28:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:05:57.323 17:28:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:57.323 17:28:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:05:57.323 17:28:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:05:57.323 17:28:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2127930 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:57.323 00:05:57.323 real 0m3.318s 00:05:57.323 user 0m0.023s 00:05:57.323 sys 0m0.056s 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:05:57.323 ************************************ 00:05:57.323 END TEST filesystem_xfs 00:05:57.323 ************************************ 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:57.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:57.323 17:28:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2127930 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2127930 ']' 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2127930 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2127930 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2127930' 00:05:57.323 killing process with pid 2127930 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2127930 00:05:57.323 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2127930 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:05:57.891 00:05:57.891 real 0m13.845s 00:05:57.891 user 0m53.246s 00:05:57.891 sys 0m1.972s 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.891 ************************************ 00:05:57.891 END TEST nvmf_filesystem_no_in_capsule 00:05:57.891 ************************************ 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
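Teardown for the no-in-capsule half, traced above, walks back through the stack in reverse: drop the test partition, flush, disconnect the initiator, delete the subsystem over RPC, and stop the target. In sketch form:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # remove partition 1 under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # initiator detaches
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                 # stop nvmf_tgt (pid 2127930 here)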
-le 1 ']' 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.891 ************************************ 00:05:57.891 START TEST nvmf_filesystem_in_capsule 00:05:57.891 ************************************ 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2129766 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2129766 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2129766 ']' 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.891 17:28:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.891 [2024-07-15 17:28:52.848835] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:05:57.891 [2024-07-15 17:28:52.848942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.891 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.891 [2024-07-15 17:28:52.918835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.149 [2024-07-15 17:28:53.039521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:58.149 [2024-07-15 17:28:53.039579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:05:58.149 [2024-07-15 17:28:53.039612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:58.149 [2024-07-15 17:28:53.039626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:58.149 [2024-07-15 17:28:53.039638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:58.149 [2024-07-15 17:28:53.039705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.149 [2024-07-15 17:28:53.039761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.149 [2024-07-15 17:28:53.039825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.149 [2024-07-15 17:28:53.039828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.713 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.714 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:58.714 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:58.714 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.714 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 [2024-07-15 17:28:53.873243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.971 17:28:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 Malloc1 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.971 17:28:54 
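From here the suite reruns the identical flow as nvmf_filesystem_in_capsule against a fresh target (pid 2129766); the only functional difference is the transport's in-capsule data size, which lets up to 4 KiB of write data travel inside the NVMe/TCP command capsule instead of requiring a separate data transfer:

  # First pass (nvmf_filesystem_no_in_capsule):
  ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192 -c 0
  # Second pass (nvmf_filesystem_in_capsule):
  ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192 -c 4096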
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 [2024-07-15 17:28:54.059377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.971 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:58.971 { 00:05:58.971 "name": "Malloc1", 00:05:58.971 "aliases": [ 00:05:58.971 "43aa9e73-e9bb-42fa-abb6-a424747a622c" 00:05:58.971 ], 00:05:58.971 "product_name": "Malloc disk", 00:05:58.971 "block_size": 512, 00:05:58.971 "num_blocks": 1048576, 00:05:58.971 "uuid": "43aa9e73-e9bb-42fa-abb6-a424747a622c", 00:05:58.971 "assigned_rate_limits": { 00:05:58.971 "rw_ios_per_sec": 0, 00:05:58.971 "rw_mbytes_per_sec": 0, 00:05:58.971 "r_mbytes_per_sec": 0, 00:05:58.971 "w_mbytes_per_sec": 0 00:05:58.971 }, 00:05:58.971 "claimed": true, 00:05:58.971 "claim_type": "exclusive_write", 00:05:58.971 "zoned": false, 00:05:58.971 "supported_io_types": { 00:05:58.971 "read": true, 00:05:58.971 "write": true, 00:05:58.971 "unmap": true, 00:05:58.971 "flush": true, 00:05:58.971 "reset": true, 00:05:58.971 "nvme_admin": false, 00:05:58.971 "nvme_io": false, 00:05:58.971 "nvme_io_md": false, 00:05:58.971 "write_zeroes": true, 00:05:58.971 "zcopy": true, 00:05:58.971 "get_zone_info": false, 00:05:58.971 "zone_management": false, 00:05:58.971 
"zone_append": false, 00:05:58.971 "compare": false, 00:05:58.971 "compare_and_write": false, 00:05:58.971 "abort": true, 00:05:58.971 "seek_hole": false, 00:05:58.971 "seek_data": false, 00:05:58.971 "copy": true, 00:05:58.971 "nvme_iov_md": false 00:05:58.971 }, 00:05:58.971 "memory_domains": [ 00:05:58.971 { 00:05:58.971 "dma_device_id": "system", 00:05:58.971 "dma_device_type": 1 00:05:58.971 }, 00:05:58.971 { 00:05:58.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.971 "dma_device_type": 2 00:05:58.971 } 00:05:58.971 ], 00:05:58.971 "driver_specific": {} 00:05:58.971 } 00:05:58.971 ]' 00:05:58.972 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:59.238 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:59.238 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:59.238 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:59.238 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:59.238 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:59.238 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:59.238 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:59.805 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:59.805 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:59.805 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:59.805 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:59.805 17:28:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:02.327 17:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:02.327 17:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:02.891 17:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:03.823 ************************************ 00:06:03.823 START TEST filesystem_in_capsule_ext4 00:06:03.823 ************************************ 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:03.823 17:28:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:03.823 17:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:03.824 mke2fs 1.46.5 (30-Dec-2021) 00:06:04.081 Discarding device blocks: 0/522240 done 00:06:04.081 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:04.081 Filesystem UUID: a1974659-0ac5-4de7-bfee-ede0e9078564 00:06:04.081 Superblock backups stored on blocks: 00:06:04.081 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:04.081 00:06:04.081 Allocating group tables: 0/64 done 00:06:04.081 Writing inode tables: 0/64 done 00:06:04.081 Creating journal (8192 blocks): done 00:06:04.081 Writing superblocks and filesystem accounting information: 0/64 done 00:06:04.081 00:06:04.081 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:04.081 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2129766 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:04.339 00:06:04.339 real 0m0.530s 00:06:04.339 user 0m0.016s 00:06:04.339 sys 0m0.051s 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:04.339 ************************************ 00:06:04.339 END TEST filesystem_in_capsule_ext4 00:06:04.339 ************************************ 00:06:04.339 
17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.339 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.596 ************************************ 00:06:04.596 START TEST filesystem_in_capsule_btrfs 00:06:04.596 ************************************ 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:04.596 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:04.853 btrfs-progs v6.6.2 00:06:04.853 See https://btrfs.readthedocs.io for more information. 00:06:04.853 00:06:04.853 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:04.853 NOTE: several default settings have changed in version 5.15, please make sure 00:06:04.853 this does not affect your deployments: 00:06:04.853 - DUP for metadata (-m dup) 00:06:04.853 - enabled no-holes (-O no-holes) 00:06:04.853 - enabled free-space-tree (-R free-space-tree) 00:06:04.853 00:06:04.853 Label: (null) 00:06:04.853 UUID: 4283a5cc-2301-4d29-952b-d12d471b44a8 00:06:04.853 Node size: 16384 00:06:04.853 Sector size: 4096 00:06:04.853 Filesystem size: 510.00MiB 00:06:04.853 Block group profiles: 00:06:04.853 Data: single 8.00MiB 00:06:04.853 Metadata: DUP 32.00MiB 00:06:04.853 System: DUP 8.00MiB 00:06:04.853 SSD detected: yes 00:06:04.853 Zoned device: no 00:06:04.853 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:04.853 Runtime features: free-space-tree 00:06:04.853 Checksum: crc32c 00:06:04.853 Number of devices: 1 00:06:04.853 Devices: 00:06:04.853 ID SIZE PATH 00:06:04.853 1 510.00MiB /dev/nvme0n1p1 00:06:04.853 00:06:04.853 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:04.853 17:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2129766 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:05.417 00:06:05.417 real 0m0.832s 00:06:05.417 user 0m0.030s 00:06:05.417 sys 0m0.100s 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:05.417 ************************************ 00:06:05.417 END TEST filesystem_in_capsule_btrfs 00:06:05.417 ************************************ 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.417 ************************************ 00:06:05.417 START TEST filesystem_in_capsule_xfs 00:06:05.417 ************************************ 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:05.417 17:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:05.417 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:05.417 = sectsz=512 attr=2, projid32bit=1 00:06:05.417 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:05.417 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:05.417 data = bsize=4096 blocks=130560, imaxpct=25 00:06:05.417 = sunit=0 swidth=0 blks 00:06:05.417 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:05.417 log =internal log bsize=4096 blocks=16384, version=2 00:06:05.417 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:05.417 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:06.348 Discarding blocks...Done. 
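Each filesystem variant (ext4, btrfs, and the xfs run whose mkfs output appears above) exercises the same create-and-verify routine on the GPT partition made earlier. A minimal sketch of that routine, assuming the device and mountpoint from the trace:

# partition the namespace once, then per filesystem: mkfs, mount, touch/sync/rm, unmount
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkfs.xfs -f /dev/nvme0n1p1      # mkfs.ext4 -F or mkfs.btrfs -f for the other variants
mkdir -p /mnt/device
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device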
00:06:06.348 17:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:06.348 17:29:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2129766 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:08.870 00:06:08.870 real 0m3.457s 00:06:08.870 user 0m0.017s 00:06:08.870 sys 0m0.061s 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:08.870 ************************************ 00:06:08.870 END TEST filesystem_in_capsule_xfs 00:06:08.870 ************************************ 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:08.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:08.870 17:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:08.870 17:29:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2129766 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2129766 ']' 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2129766 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2129766 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2129766' 00:06:09.128 killing process with pid 2129766 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2129766 00:06:09.128 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2129766 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:09.696 00:06:09.696 real 0m11.741s 00:06:09.696 user 0m45.107s 00:06:09.696 sys 0m1.759s 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:09.696 ************************************ 00:06:09.696 END TEST nvmf_filesystem_in_capsule 00:06:09.696 ************************************ 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:09.696 rmmod nvme_tcp 00:06:09.696 rmmod nvme_fabrics 00:06:09.696 rmmod nvme_keyring 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:09.696 17:29:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.601 17:29:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:11.601 00:06:11.601 real 0m30.170s 00:06:11.601 user 1m39.324s 00:06:11.601 sys 0m5.359s 00:06:11.601 17:29:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.601 17:29:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.601 ************************************ 00:06:11.601 END TEST nvmf_filesystem 00:06:11.601 ************************************ 00:06:11.601 17:29:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:11.601 17:29:06 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:11.601 17:29:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:11.601 17:29:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.601 17:29:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.601 ************************************ 00:06:11.601 START TEST nvmf_target_discovery 00:06:11.601 ************************************ 00:06:11.601 17:29:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:11.860 * Looking for test storage... 
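Before the discovery test begins, the filesystem test tears its target down; a minimal sketch of that teardown, mirroring the commands in the trace (the rpc.py path is assumed, the pid is the one recorded above):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # drop the initiator session
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 2129766                                        # nvmf_tgt pid from the trace
modprobe -r nvme-tcp                                # unload the initiator-side kernel modules
modprobe -r nvme-fabrics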
00:06:11.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:11.860 17:29:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.791 17:29:08 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:13.791 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:13.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:13.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:13.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:13.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.792 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:14.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:06:14.050 00:06:14.050 --- 10.0.0.2 ping statistics --- 00:06:14.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.050 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:06:14.050 00:06:14.050 --- 10.0.0.1 ping statistics --- 00:06:14.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.050 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2133270 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2133270 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2133270 ']' 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.050 17:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:14.050 [2024-07-15 17:29:08.996873] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:14.050 [2024-07-15 17:29:08.996961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.050 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.050 [2024-07-15 17:29:09.069934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.308 [2024-07-15 17:29:09.192076] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.308 [2024-07-15 17:29:09.192131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.309 [2024-07-15 17:29:09.192148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.309 [2024-07-15 17:29:09.192170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.309 [2024-07-15 17:29:09.192191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:14.309 [2024-07-15 17:29:09.192250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.309 [2024-07-15 17:29:09.192308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.309 [2024-07-15 17:29:09.192362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.309 [2024-07-15 17:29:09.192365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.874 17:29:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.874 17:29:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:14.874 17:29:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:14.874 17:29:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.874 17:29:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:14.874 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.874 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:14.874 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.874 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:14.874 [2024-07-15 17:29:10.009980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
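The discovery test moves the target-side port into its own network namespace, starts nvmf_tgt there, and creates the TCP transport before the subsystem loop that begins above. A minimal sketch of that setup, assuming the interface names, addresses and binary/rpc.py paths from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# start the target inside the namespace, then create the TCP transport
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192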
00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 Null1 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 [2024-07-15 17:29:10.050275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 Null2 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:15.133 17:29:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 Null3 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 Null4 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.133 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:15.391 00:06:15.391 Discovery Log Number of Records 6, Generation counter 6 00:06:15.391 =====Discovery Log Entry 0====== 00:06:15.391 trtype: tcp 00:06:15.391 adrfam: ipv4 00:06:15.391 subtype: current discovery subsystem 00:06:15.391 treq: not required 00:06:15.391 portid: 0 00:06:15.391 trsvcid: 4420 00:06:15.391 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:15.391 traddr: 10.0.0.2 00:06:15.391 eflags: explicit discovery connections, duplicate discovery information 00:06:15.391 sectype: none 00:06:15.391 =====Discovery Log Entry 1====== 00:06:15.391 trtype: tcp 00:06:15.391 adrfam: ipv4 00:06:15.391 subtype: nvme subsystem 00:06:15.391 treq: not required 00:06:15.391 portid: 0 00:06:15.391 trsvcid: 4420 00:06:15.391 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:15.391 traddr: 10.0.0.2 00:06:15.391 eflags: none 00:06:15.391 sectype: none 00:06:15.391 =====Discovery Log Entry 2====== 00:06:15.391 trtype: tcp 00:06:15.391 adrfam: ipv4 00:06:15.391 subtype: nvme subsystem 00:06:15.391 treq: not required 00:06:15.391 portid: 0 00:06:15.391 trsvcid: 4420 00:06:15.391 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:15.391 traddr: 10.0.0.2 00:06:15.391 eflags: none 00:06:15.391 sectype: none 00:06:15.391 =====Discovery Log Entry 3====== 00:06:15.391 trtype: tcp 00:06:15.391 adrfam: ipv4 00:06:15.391 subtype: nvme subsystem 00:06:15.391 treq: not required 00:06:15.391 portid: 0 00:06:15.391 trsvcid: 4420 00:06:15.391 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:15.391 traddr: 10.0.0.2 00:06:15.391 eflags: none 00:06:15.391 sectype: none 00:06:15.391 =====Discovery Log Entry 4====== 00:06:15.391 trtype: tcp 00:06:15.391 adrfam: ipv4 00:06:15.391 subtype: nvme subsystem 00:06:15.391 treq: not required 
00:06:15.391 portid: 0 00:06:15.391 trsvcid: 4420 00:06:15.391 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:15.391 traddr: 10.0.0.2 00:06:15.391 eflags: none 00:06:15.391 sectype: none 00:06:15.391 =====Discovery Log Entry 5====== 00:06:15.391 trtype: tcp 00:06:15.391 adrfam: ipv4 00:06:15.391 subtype: discovery subsystem referral 00:06:15.391 treq: not required 00:06:15.391 portid: 0 00:06:15.391 trsvcid: 4430 00:06:15.391 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:15.391 traddr: 10.0.0.2 00:06:15.391 eflags: none 00:06:15.391 sectype: none 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:15.392 Perform nvmf subsystem discovery via RPC 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 [ 00:06:15.392 { 00:06:15.392 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:15.392 "subtype": "Discovery", 00:06:15.392 "listen_addresses": [ 00:06:15.392 { 00:06:15.392 "trtype": "TCP", 00:06:15.392 "adrfam": "IPv4", 00:06:15.392 "traddr": "10.0.0.2", 00:06:15.392 "trsvcid": "4420" 00:06:15.392 } 00:06:15.392 ], 00:06:15.392 "allow_any_host": true, 00:06:15.392 "hosts": [] 00:06:15.392 }, 00:06:15.392 { 00:06:15.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:15.392 "subtype": "NVMe", 00:06:15.392 "listen_addresses": [ 00:06:15.392 { 00:06:15.392 "trtype": "TCP", 00:06:15.392 "adrfam": "IPv4", 00:06:15.392 "traddr": "10.0.0.2", 00:06:15.392 "trsvcid": "4420" 00:06:15.392 } 00:06:15.392 ], 00:06:15.392 "allow_any_host": true, 00:06:15.392 "hosts": [], 00:06:15.392 "serial_number": "SPDK00000000000001", 00:06:15.392 "model_number": "SPDK bdev Controller", 00:06:15.392 "max_namespaces": 32, 00:06:15.392 "min_cntlid": 1, 00:06:15.392 "max_cntlid": 65519, 00:06:15.392 "namespaces": [ 00:06:15.392 { 00:06:15.392 "nsid": 1, 00:06:15.392 "bdev_name": "Null1", 00:06:15.392 "name": "Null1", 00:06:15.392 "nguid": "8594B6D03C45455495D1588DE89DB87D", 00:06:15.392 "uuid": "8594b6d0-3c45-4554-95d1-588de89db87d" 00:06:15.392 } 00:06:15.392 ] 00:06:15.392 }, 00:06:15.392 { 00:06:15.392 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:15.392 "subtype": "NVMe", 00:06:15.392 "listen_addresses": [ 00:06:15.392 { 00:06:15.392 "trtype": "TCP", 00:06:15.392 "adrfam": "IPv4", 00:06:15.392 "traddr": "10.0.0.2", 00:06:15.392 "trsvcid": "4420" 00:06:15.392 } 00:06:15.392 ], 00:06:15.392 "allow_any_host": true, 00:06:15.392 "hosts": [], 00:06:15.392 "serial_number": "SPDK00000000000002", 00:06:15.392 "model_number": "SPDK bdev Controller", 00:06:15.392 "max_namespaces": 32, 00:06:15.392 "min_cntlid": 1, 00:06:15.392 "max_cntlid": 65519, 00:06:15.392 "namespaces": [ 00:06:15.392 { 00:06:15.392 "nsid": 1, 00:06:15.392 "bdev_name": "Null2", 00:06:15.392 "name": "Null2", 00:06:15.392 "nguid": "CF92ED84019C4D869AB5D65006603F37", 00:06:15.392 "uuid": "cf92ed84-019c-4d86-9ab5-d65006603f37" 00:06:15.392 } 00:06:15.392 ] 00:06:15.392 }, 00:06:15.392 { 00:06:15.392 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:15.392 "subtype": "NVMe", 00:06:15.392 "listen_addresses": [ 00:06:15.392 { 00:06:15.392 "trtype": "TCP", 00:06:15.392 "adrfam": "IPv4", 00:06:15.392 "traddr": "10.0.0.2", 00:06:15.392 "trsvcid": "4420" 00:06:15.392 } 00:06:15.392 ], 00:06:15.392 "allow_any_host": true, 
00:06:15.392 "hosts": [], 00:06:15.392 "serial_number": "SPDK00000000000003", 00:06:15.392 "model_number": "SPDK bdev Controller", 00:06:15.392 "max_namespaces": 32, 00:06:15.392 "min_cntlid": 1, 00:06:15.392 "max_cntlid": 65519, 00:06:15.392 "namespaces": [ 00:06:15.392 { 00:06:15.392 "nsid": 1, 00:06:15.392 "bdev_name": "Null3", 00:06:15.392 "name": "Null3", 00:06:15.392 "nguid": "B0AD335DEE844FC2A959446216F5B53C", 00:06:15.392 "uuid": "b0ad335d-ee84-4fc2-a959-446216f5b53c" 00:06:15.392 } 00:06:15.392 ] 00:06:15.392 }, 00:06:15.392 { 00:06:15.392 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:15.392 "subtype": "NVMe", 00:06:15.392 "listen_addresses": [ 00:06:15.392 { 00:06:15.392 "trtype": "TCP", 00:06:15.392 "adrfam": "IPv4", 00:06:15.392 "traddr": "10.0.0.2", 00:06:15.392 "trsvcid": "4420" 00:06:15.392 } 00:06:15.392 ], 00:06:15.392 "allow_any_host": true, 00:06:15.392 "hosts": [], 00:06:15.392 "serial_number": "SPDK00000000000004", 00:06:15.392 "model_number": "SPDK bdev Controller", 00:06:15.392 "max_namespaces": 32, 00:06:15.392 "min_cntlid": 1, 00:06:15.392 "max_cntlid": 65519, 00:06:15.392 "namespaces": [ 00:06:15.392 { 00:06:15.392 "nsid": 1, 00:06:15.392 "bdev_name": "Null4", 00:06:15.392 "name": "Null4", 00:06:15.392 "nguid": "C659B781AE594455AEA381359A3CFA22", 00:06:15.392 "uuid": "c659b781-ae59-4455-aea3-81359a3cfa22" 00:06:15.392 } 00:06:15.392 ] 00:06:15.392 } 00:06:15.392 ] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
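The nvmf_get_subsystems dump above is plain JSON, so it can be sliced with jq the same way the rest of this suite does. A minimal sketch, assuming rpc_cmd still resolves to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock and that it runs before the teardown loop here finishes deleting the cnode subsystems:

  # list every subsystem NQN the target currently exports
  rpc_cmd nvmf_get_subsystems | jq -r '.[].nqn'

  # pull the namespaces backing one subsystem (cnode3 taken as an example)
  rpc_cmd nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode3") | .namespaces'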
00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:15.392 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:15.392 rmmod nvme_tcp 00:06:15.392 rmmod nvme_fabrics 00:06:15.392 rmmod nvme_keyring 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2133270 ']' 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2133270 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2133270 ']' 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2133270 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2133270 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2133270' 00:06:15.651 killing process with pid 2133270 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2133270 00:06:15.651 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2133270 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:15.911 17:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.816 17:29:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:17.816 00:06:17.816 real 0m6.202s 00:06:17.816 user 0m7.523s 00:06:17.816 sys 0m1.868s 00:06:17.816 17:29:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.816 17:29:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:17.816 ************************************ 00:06:17.816 END TEST nvmf_target_discovery 00:06:17.816 ************************************ 00:06:17.816 17:29:12 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:17.816 17:29:12 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:17.816 17:29:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:17.816 17:29:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.816 17:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.816 ************************************ 00:06:17.816 START TEST nvmf_referrals 00:06:17.816 ************************************ 00:06:17.816 17:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:18.074 * Looking for test storage... 00:06:18.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
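NVMF_REFERRAL_IP_1..3 and NVMF_PORT_REFERRAL are what referrals.sh adds and removes below. As a rough standalone equivalent of that setup, a sketch only: it assumes the target app is already running and that rpc_cmd wraps scripts/rpc.py on the default /var/tmp/spdk.sock, as it does in this harness.

  # TCP transport plus a discovery listener, then three referral entries
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length    # expect 3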
00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.074 17:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.075 17:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.075 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:18.075 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:18.075 17:29:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:18.075 17:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.995 17:29:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:19.995 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:19.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.995 17:29:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:19.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.995 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:19.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.996 17:29:14 
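What nvmftestinit is doing here with the two ice ports is a plain network-namespace split: the target side (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk while the initiator keeps cvl_0_1 (10.0.0.1) in the default namespace. The full sequence, condensed (the link-up, firewall, and ping steps follow just below; interface names are whatever this host enumerated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2      # initiator -> target sanity check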
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:19.996 17:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:19.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:06:19.996 00:06:19.996 --- 10.0.0.2 ping statistics --- 00:06:19.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.996 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:06:19.996 00:06:19.996 --- 10.0.0.1 ping statistics --- 00:06:19.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.996 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2135466 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2135466 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2135466 ']' 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:19.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.996 17:29:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:19.996 [2024-07-15 17:29:15.114185] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:19.996 [2024-07-15 17:29:15.114287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.255 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.255 [2024-07-15 17:29:15.177887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.255 [2024-07-15 17:29:15.297218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.255 [2024-07-15 17:29:15.297287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.255 [2024-07-15 17:29:15.297303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.255 [2024-07-15 17:29:15.297316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.255 [2024-07-15 17:29:15.297327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:20.255 [2024-07-15 17:29:15.297389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.255 [2024-07-15 17:29:15.297446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.255 [2024-07-15 17:29:15.297498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.255 [2024-07-15 17:29:15.297501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.188 [2024-07-15 17:29:16.120192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.188 [2024-07-15 17:29:16.132384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:21.188 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:21.189 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:21.446 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:21.447 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:21.447 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:21.447 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:21.704 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:21.705 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:21.962 17:29:16 
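Both get_referral_ips nvme and get_discovery_entries are just jq filters over nvme discover's JSON output. A sketch of the two queries used here; the --hostnqn/--hostid values from this run are dropped for brevity, on the assumption that nvme-cli falls back to the host defaults:

  # traddr of every referral entry, i.e. everything except the local discovery subsystem
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # full records for referrals that point at an NVMe subsystem rather than another discovery service
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq '.records[] | select(.subtype == "nvme subsystem")'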
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:21.962 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:21.962 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:21.962 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:21.962 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:21.962 17:29:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:22.220 17:29:17 
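Winding the referrals back down follows the same pattern around this point: drop the referral that was bound to cnode1, check that only the plain discovery referral remains, then delete that one too. Condensed from the steps issued here, same socket and addresses as above:

  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # 127.0.0.2 (discovery referral only)
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
  rpc_cmd nvmf_discovery_get_referrals | jq length                    # 0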
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.220 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:22.479 
17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:22.479 rmmod nvme_tcp 00:06:22.479 rmmod nvme_fabrics 00:06:22.479 rmmod nvme_keyring 00:06:22.479 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2135466 ']' 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2135466 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2135466 ']' 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2135466 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2135466 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2135466' 00:06:22.738 killing process with pid 2135466 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2135466 00:06:22.738 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2135466 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:22.997 17:29:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.905 17:29:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:24.905 00:06:24.905 real 0m7.027s 00:06:24.905 user 0m11.917s 00:06:24.905 sys 0m2.080s 00:06:24.905 17:29:19 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.905 17:29:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:24.905 ************************************ 00:06:24.905 END TEST nvmf_referrals 00:06:24.905 ************************************ 00:06:24.905 17:29:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:24.905 17:29:19 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:24.905 17:29:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:24.905 17:29:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.905 17:29:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.905 ************************************ 00:06:24.905 START TEST nvmf_connect_disconnect 00:06:24.905 ************************************ 00:06:24.905 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:25.164 * Looking for test storage... 00:06:25.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.164 17:29:20 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:25.164 17:29:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:27.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:27.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:27.114 17:29:22 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:27.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:27.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.114 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:27.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:06:27.372 00:06:27.372 --- 10.0.0.2 ping statistics --- 00:06:27.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.372 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
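The namespace plumbing in this block is what lets a single host act as both initiator and target with separate network stacks. Condensed, and using the cvl_0_* names the two E810 ports were given earlier in the log, the setup is roughly:

ip netns add cvl_0_0_ns_spdk                  # dedicated namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port moves into the target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                            # reachability check, root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1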
00:06:27.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:06:27.372 00:06:27.372 --- 10.0.0.1 ping statistics --- 00:06:27.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.372 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2137769 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2137769 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2137769 ']' 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.372 17:29:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:27.372 [2024-07-15 17:29:22.351790] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
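With the namespace ready, nvmfappstart launches the target inside it so its TCP listeners bind on the 10.0.0.2 side. A minimal sketch of what that amounts to in this run; the backgrounding and pid capture are assumptions about the helper's internals, and waitforlisten is the autotest_common.sh helper that polls the /var/tmp/spdk.sock RPC socket:

# Start nvmf_tgt in the target namespace: instance id 0, all tracepoint groups, core mask 0xF.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                  # 2137769 in this run
waitforlisten "$nvmfpid"    # block until the app answers on its RPC socket before issuing RPCs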
00:06:27.372 [2024-07-15 17:29:22.351899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.372 [2024-07-15 17:29:22.414986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.629 [2024-07-15 17:29:22.534732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.629 [2024-07-15 17:29:22.534783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.629 [2024-07-15 17:29:22.534810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.629 [2024-07-15 17:29:22.534824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.629 [2024-07-15 17:29:22.534835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:27.629 [2024-07-15 17:29:22.534914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.629 [2024-07-15 17:29:22.534969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.629 [2024-07-15 17:29:22.535019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.629 [2024-07-15 17:29:22.535022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.195 [2024-07-15 17:29:23.315940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.195 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:28.493 17:29:23 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.493 [2024-07-15 17:29:23.377136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:28.493 17:29:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:31.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:34.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:36.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:39.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:41.894 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:41.894 rmmod nvme_tcp 00:06:41.894 rmmod nvme_fabrics 00:06:42.151 rmmod nvme_keyring 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2137769 ']' 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2137769 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- 
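The provisioning RPCs and the host-side loop that produced the five "disconnected 1 controller(s)" notices above can be condensed as follows. The target-side parameters are exactly the ones visible in the log; the host-side connect/disconnect pair is a sketch of what connect_disconnect.sh drives per iteration (its intermediate checks are omitted), using standard nvme-cli flags:

# Target side, via the RPC socket of the nvmf_tgt started above:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB bdev, 512-byte blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side, repeated num_iterations=5 times:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits the "disconnected 1 controller(s)" line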
common/autotest_common.sh@948 -- # '[' -z 2137769 ']' 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2137769 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2137769 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2137769' 00:06:42.151 killing process with pid 2137769 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2137769 00:06:42.151 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2137769 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.408 17:29:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.305 17:29:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:44.305 00:06:44.305 real 0m19.410s 00:06:44.305 user 0m59.006s 00:06:44.305 sys 0m3.243s 00:06:44.305 17:29:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.305 17:29:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:44.305 ************************************ 00:06:44.305 END TEST nvmf_connect_disconnect 00:06:44.305 ************************************ 00:06:44.562 17:29:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:44.562 17:29:39 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:44.562 17:29:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:44.562 17:29:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.562 17:29:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.562 ************************************ 00:06:44.562 START TEST nvmf_multitarget 00:06:44.562 ************************************ 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:44.562 * Looking for test storage... 
00:06:44.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.562 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:44.563 17:29:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.096 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:47.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:47.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:06:47.097 00:06:47.097 --- 10.0.0.2 ping statistics --- 00:06:47.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.097 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:47.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:06:47.097 00:06:47.097 --- 10.0.0.1 ping statistics --- 00:06:47.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.097 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2141548 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2141548 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2141548 ']' 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.097 17:29:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.097 [2024-07-15 17:29:41.836174] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:06:47.097 [2024-07-15 17:29:41.836262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.097 [2024-07-15 17:29:41.898820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.097 [2024-07-15 17:29:42.007019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.097 [2024-07-15 17:29:42.007072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.097 [2024-07-15 17:29:42.007086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.097 [2024-07-15 17:29:42.007098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.097 [2024-07-15 17:29:42.007108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:47.097 [2024-07-15 17:29:42.007188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.097 [2024-07-15 17:29:42.007271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.097 [2024-07-15 17:29:42.007330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.097 [2024-07-15 17:29:42.007327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:47.097 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:47.355 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:47.355 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:47.355 "nvmf_tgt_1" 00:06:47.355 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:47.614 "nvmf_tgt_2" 00:06:47.614 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:47.614 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:47.614 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:06:47.614 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:47.614 true 00:06:47.614 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:47.872 true 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.872 17:29:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.872 rmmod nvme_tcp 00:06:47.872 rmmod nvme_fabrics 00:06:47.872 rmmod nvme_keyring 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2141548 ']' 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2141548 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2141548 ']' 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2141548 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2141548 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2141548' 00:06:48.130 killing process with pid 2141548 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2141548 00:06:48.130 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2141548 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.400 17:29:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.336 17:29:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:50.336 00:06:50.336 real 0m5.878s 00:06:50.336 user 0m6.637s 00:06:50.336 sys 0m1.953s 00:06:50.336 17:29:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.336 17:29:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:50.336 ************************************ 00:06:50.336 END TEST nvmf_multitarget 00:06:50.336 ************************************ 00:06:50.336 17:29:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:50.336 17:29:45 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:50.336 17:29:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.336 17:29:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.336 17:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.336 ************************************ 00:06:50.336 START TEST nvmf_rpc 00:06:50.336 ************************************ 00:06:50.336 17:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:50.336 * Looking for test storage... 
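The nvmf_multitarget run that finished above exercises only a handful of RPCs against a single nvmf_tgt process. A minimal sketch of that flow, paraphrasing the commands visible in the trace (the $RPC shorthand and the bracket-test style are introduced here purely for readability; the real test invokes multitarget_rpc.py by its full workspace path):

  RPC=./spdk/test/nvmf/target/multitarget_rpc.py        # abbreviated path; see the trace for the full one
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]      # only the default target exists at the start
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32           # prints the new target name, "nvmf_tgt_1"
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32           # prints "nvmf_tgt_2"
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]      # default target plus the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1                 # each delete prints "true" on success
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default target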
00:06:50.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.336 17:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.336 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:50.336 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.337 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.596 17:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.500 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.500 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.500 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:06:52.500 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.500 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.500 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.500 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:52.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:52.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:52.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:52.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.501 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:52.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:06:52.760 00:06:52.760 --- 10.0.0.2 ping statistics --- 00:06:52.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.760 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:52.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:06:52.760 00:06:52.760 --- 10.0.0.1 ping statistics --- 00:06:52.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.760 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2143700 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2143700 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2143700 ']' 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.760 17:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.760 [2024-07-15 17:29:47.787234] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:06:52.760 [2024-07-15 17:29:47.787323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.760 [2024-07-15 17:29:47.861798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.019 [2024-07-15 17:29:47.983395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.019 [2024-07-15 17:29:47.983460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
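Both tests rely on the same wiring: nvmf_tcp_init moves one port of the e810 pair into a private network namespace and nvmfappstart then launches the target inside it. A condensed sketch of the steps traced above (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses and the abbreviated nvmf_tgt path are the ones from this particular run; run as root):

  ip netns add cvl_0_0_ns_spdk                                     # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                               # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # nvmfappstart; path abbreviated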
00:06:53.019 [2024-07-15 17:29:47.983477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.019 [2024-07-15 17:29:47.983490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.019 [2024-07-15 17:29:47.983502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.019 [2024-07-15 17:29:47.983583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.019 [2024-07-15 17:29:47.983635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.019 [2024-07-15 17:29:47.983688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.019 [2024-07-15 17:29:47.983691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.591 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.591 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.591 17:29:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:53.591 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:53.591 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:06:53.850 "tick_rate": 2700000000, 00:06:53.850 "poll_groups": [ 00:06:53.850 { 00:06:53.850 "name": "nvmf_tgt_poll_group_000", 00:06:53.850 "admin_qpairs": 0, 00:06:53.850 "io_qpairs": 0, 00:06:53.850 "current_admin_qpairs": 0, 00:06:53.850 "current_io_qpairs": 0, 00:06:53.850 "pending_bdev_io": 0, 00:06:53.850 "completed_nvme_io": 0, 00:06:53.850 "transports": [] 00:06:53.850 }, 00:06:53.850 { 00:06:53.850 "name": "nvmf_tgt_poll_group_001", 00:06:53.850 "admin_qpairs": 0, 00:06:53.850 "io_qpairs": 0, 00:06:53.850 "current_admin_qpairs": 0, 00:06:53.850 "current_io_qpairs": 0, 00:06:53.850 "pending_bdev_io": 0, 00:06:53.850 "completed_nvme_io": 0, 00:06:53.850 "transports": [] 00:06:53.850 }, 00:06:53.850 { 00:06:53.850 "name": "nvmf_tgt_poll_group_002", 00:06:53.850 "admin_qpairs": 0, 00:06:53.850 "io_qpairs": 0, 00:06:53.850 "current_admin_qpairs": 0, 00:06:53.850 "current_io_qpairs": 0, 00:06:53.850 "pending_bdev_io": 0, 00:06:53.850 "completed_nvme_io": 0, 00:06:53.850 "transports": [] 00:06:53.850 }, 00:06:53.850 { 00:06:53.850 "name": "nvmf_tgt_poll_group_003", 00:06:53.850 "admin_qpairs": 0, 00:06:53.850 "io_qpairs": 0, 00:06:53.850 "current_admin_qpairs": 0, 00:06:53.850 "current_io_qpairs": 0, 00:06:53.850 "pending_bdev_io": 0, 00:06:53.850 "completed_nvme_io": 0, 00:06:53.850 "transports": [] 00:06:53.850 } 00:06:53.850 ] 00:06:53.850 }' 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:06:53.850 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.851 [2024-07-15 17:29:48.840263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:53.851 "tick_rate": 2700000000, 00:06:53.851 "poll_groups": [ 00:06:53.851 { 00:06:53.851 "name": "nvmf_tgt_poll_group_000", 00:06:53.851 "admin_qpairs": 0, 00:06:53.851 "io_qpairs": 0, 00:06:53.851 "current_admin_qpairs": 0, 00:06:53.851 "current_io_qpairs": 0, 00:06:53.851 "pending_bdev_io": 0, 00:06:53.851 "completed_nvme_io": 0, 00:06:53.851 "transports": [ 00:06:53.851 { 00:06:53.851 "trtype": "TCP" 00:06:53.851 } 00:06:53.851 ] 00:06:53.851 }, 00:06:53.851 { 00:06:53.851 "name": "nvmf_tgt_poll_group_001", 00:06:53.851 "admin_qpairs": 0, 00:06:53.851 "io_qpairs": 0, 00:06:53.851 "current_admin_qpairs": 0, 00:06:53.851 "current_io_qpairs": 0, 00:06:53.851 "pending_bdev_io": 0, 00:06:53.851 "completed_nvme_io": 0, 00:06:53.851 "transports": [ 00:06:53.851 { 00:06:53.851 "trtype": "TCP" 00:06:53.851 } 00:06:53.851 ] 00:06:53.851 }, 00:06:53.851 { 00:06:53.851 "name": "nvmf_tgt_poll_group_002", 00:06:53.851 "admin_qpairs": 0, 00:06:53.851 "io_qpairs": 0, 00:06:53.851 "current_admin_qpairs": 0, 00:06:53.851 "current_io_qpairs": 0, 00:06:53.851 "pending_bdev_io": 0, 00:06:53.851 "completed_nvme_io": 0, 00:06:53.851 "transports": [ 00:06:53.851 { 00:06:53.851 "trtype": "TCP" 00:06:53.851 } 00:06:53.851 ] 00:06:53.851 }, 00:06:53.851 { 00:06:53.851 "name": "nvmf_tgt_poll_group_003", 00:06:53.851 "admin_qpairs": 0, 00:06:53.851 "io_qpairs": 0, 00:06:53.851 "current_admin_qpairs": 0, 00:06:53.851 "current_io_qpairs": 0, 00:06:53.851 "pending_bdev_io": 0, 00:06:53.851 "completed_nvme_io": 0, 00:06:53.851 "transports": [ 00:06:53.851 { 00:06:53.851 "trtype": "TCP" 00:06:53.851 } 00:06:53.851 ] 00:06:53.851 } 00:06:53.851 ] 00:06:53.851 }' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.851 Malloc1 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.851 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.111 [2024-07-15 17:29:48.993688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:54.111 17:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:54.111 [2024-07-15 17:29:49.016144] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:54.111 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:54.111 could not add new controller: failed to write to nvme-fabrics device 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.111 17:29:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.682 17:29:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.682 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:54.682 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.682 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:54.682 17:29:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:57.218 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:57.218 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:57.218 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:57.218 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:57.219 17:29:51 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:57.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.219 [2024-07-15 17:29:51.840134] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:57.219 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:57.219 could not add new controller: failed to write to nvme-fabrics device 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.219 17:29:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.480 17:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:57.480 17:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:57.480 17:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:57.480 17:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:57.480 17:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:59.385 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:59.385 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:59.385 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:59.385 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:59.385 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:59.385 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:59.385 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:59.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:06:59.646 17:29:54 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 [2024-07-15 17:29:54.624545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.646 17:29:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.216 17:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:00.216 17:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:00.216 17:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:00.216 17:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:00.216 17:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:02.124 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:02.124 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:02.124 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:02.124 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:02.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.384 [2024-07-15 17:29:57.395464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.384 17:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.321 17:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:03.321 17:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:03.321 17:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:03.321 17:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:03.321 17:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.227 [2024-07-15 17:30:00.212820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.227 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.815 17:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.815 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:05.815 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.815 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:05.815 17:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:07.716 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:07.716 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:07.716 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.716 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:07.716 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.716 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:07.716 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.976 [2024-07-15 17:30:02.949365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.976 17:30:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.543 17:30:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.543 17:30:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:08.543 17:30:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.543 17:30:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:08.543 17:30:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:10.444 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:10.444 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:10.444 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.703 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:10.703 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.703 
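The waitforserial and waitforserial_disconnect helpers traced here simply poll lsblk until a block device with the expected SERIAL shows up (or goes away) on the initiator side. A rough sketch of that polling logic follows; the actual autotest_common.sh implementation may differ in retry counts and logging.
waitforserial() {    # wait until a device with serial $1 is visible
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1
}
waitforserial_disconnect() {    # wait until no device with serial $1 remains
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1
    done
    return 0
}
Used in this run as: waitforserial SPDKISFASTANDAWESOME.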
17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:10.703 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.703 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.703 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.704 [2024-07-15 17:30:05.720679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.704 17:30:05 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.704 17:30:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.270 17:30:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.270 17:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:11.270 17:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.270 17:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:11.270 17:30:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.806 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- 
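Condensed, each iteration of the loop traced above (target/rpc.sh lines 81-94 in this run) builds a subsystem, exports it over TCP, connects the kernel initiator, then tears everything back down. A simplified sketch of one iteration; $rpc stands for spdk/scripts/rpc.py (the full workspace path in this job), and the serial, NQN and address are the ones used in this run.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host "$nqn"
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
waitforserial SPDKISFASTANDAWESOME             # block device appears
nvme disconnect -n "$nqn"
waitforserial_disconnect SPDKISFASTANDAWESOME  # block device disappears
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"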
common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 [2024-07-15 17:30:08.468022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 [2024-07-15 17:30:08.516065] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 [2024-07-15 17:30:08.564258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 [2024-07-15 17:30:08.612412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
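The second loop (rpc.sh lines 99-107) repeats a lighter cycle five times with no initiator connection at all: create the subsystem and listener, hot-add the Malloc1 namespace, hot-remove it, delete the subsystem. A sketch, reusing $rpc and $nqn from the snippet above:
for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1     # nsid auto-assigned, becomes 1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
done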
00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 [2024-07-15 17:30:08.660558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.807 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:13.808 "tick_rate": 2700000000, 00:07:13.808 "poll_groups": [ 00:07:13.808 { 00:07:13.808 "name": "nvmf_tgt_poll_group_000", 00:07:13.808 "admin_qpairs": 2, 00:07:13.808 "io_qpairs": 84, 00:07:13.808 "current_admin_qpairs": 0, 00:07:13.808 "current_io_qpairs": 0, 00:07:13.808 "pending_bdev_io": 0, 00:07:13.808 "completed_nvme_io": 135, 00:07:13.808 "transports": [ 00:07:13.808 { 00:07:13.808 "trtype": "TCP" 00:07:13.808 } 00:07:13.808 ] 00:07:13.808 }, 00:07:13.808 { 00:07:13.808 "name": "nvmf_tgt_poll_group_001", 00:07:13.808 "admin_qpairs": 2, 00:07:13.808 "io_qpairs": 84, 00:07:13.808 "current_admin_qpairs": 0, 00:07:13.808 "current_io_qpairs": 0, 00:07:13.808 "pending_bdev_io": 0, 00:07:13.808 "completed_nvme_io": 233, 00:07:13.808 "transports": [ 00:07:13.808 { 00:07:13.808 "trtype": "TCP" 00:07:13.808 } 00:07:13.808 ] 00:07:13.808 }, 00:07:13.808 { 00:07:13.808 
"name": "nvmf_tgt_poll_group_002", 00:07:13.808 "admin_qpairs": 1, 00:07:13.808 "io_qpairs": 84, 00:07:13.808 "current_admin_qpairs": 0, 00:07:13.808 "current_io_qpairs": 0, 00:07:13.808 "pending_bdev_io": 0, 00:07:13.808 "completed_nvme_io": 126, 00:07:13.808 "transports": [ 00:07:13.808 { 00:07:13.808 "trtype": "TCP" 00:07:13.808 } 00:07:13.808 ] 00:07:13.808 }, 00:07:13.808 { 00:07:13.808 "name": "nvmf_tgt_poll_group_003", 00:07:13.808 "admin_qpairs": 2, 00:07:13.808 "io_qpairs": 84, 00:07:13.808 "current_admin_qpairs": 0, 00:07:13.808 "current_io_qpairs": 0, 00:07:13.808 "pending_bdev_io": 0, 00:07:13.808 "completed_nvme_io": 192, 00:07:13.808 "transports": [ 00:07:13.808 { 00:07:13.808 "trtype": "TCP" 00:07:13.808 } 00:07:13.808 ] 00:07:13.808 } 00:07:13.808 ] 00:07:13.808 }' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.808 rmmod nvme_tcp 00:07:13.808 rmmod nvme_fabrics 00:07:13.808 rmmod nvme_keyring 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2143700 ']' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2143700 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2143700 ']' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2143700 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2143700 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2143700' 00:07:13.808 killing process with pid 2143700 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2143700 00:07:13.808 17:30:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2143700 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.066 17:30:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.605 17:30:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:16.605 00:07:16.605 real 0m25.786s 00:07:16.605 user 1m23.973s 00:07:16.605 sys 0m4.156s 00:07:16.605 17:30:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.605 17:30:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.605 ************************************ 00:07:16.605 END TEST nvmf_rpc 00:07:16.605 ************************************ 00:07:16.605 17:30:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:16.605 17:30:11 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:16.605 17:30:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:16.605 17:30:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.605 17:30:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:16.605 ************************************ 00:07:16.605 START TEST nvmf_invalid 00:07:16.605 ************************************ 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:16.605 * Looking for test storage... 
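nvmftestfini then unwinds the environment: unload the kernel NVMe-oF modules, kill the nvmf_tgt process recorded in $nvmfpid, drop the SPDK network namespace and flush the test address. Roughly, with the namespace-removal step being an assumption about what _remove_spdk_ns does:
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # killprocess 2143700 in this run
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1             # initiator-side test address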
00:07:16.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:16.605 17:30:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:16.606 17:30:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.524 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.524 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.524 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.524 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:18.525 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:18.525 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:18.525 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:18.525 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:07:18.525 00:07:18.525 --- 10.0.0.2 ping statistics --- 00:07:18.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.525 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:07:18.525 00:07:18.525 --- 10.0.0.1 ping statistics --- 00:07:18.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.525 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2148902 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2148902 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2148902 ']' 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.525 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.526 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.526 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.526 [2024-07-15 17:30:13.590681] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
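Before nvmf_tgt is launched, nvmf_tcp_init isolates the target-side port in its own network namespace and proves the data path with a ping in each direction, exactly as traced above; condensed:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator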
00:07:18.526 [2024-07-15 17:30:13.590763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.526 [2024-07-15 17:30:13.655455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.784 [2024-07-15 17:30:13.763657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.785 [2024-07-15 17:30:13.763709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.785 [2024-07-15 17:30:13.763722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.785 [2024-07-15 17:30:13.763732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.785 [2024-07-15 17:30:13.763741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.785 [2024-07-15 17:30:13.763820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.785 [2024-07-15 17:30:13.763894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.785 [2024-07-15 17:30:13.763950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.785 [2024-07-15 17:30:13.763954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:18.785 17:30:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20675 00:07:19.042 [2024-07-15 17:30:14.177401] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:19.302 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:19.302 { 00:07:19.302 "nqn": "nqn.2016-06.io.spdk:cnode20675", 00:07:19.302 "tgt_name": "foobar", 00:07:19.302 "method": "nvmf_create_subsystem", 00:07:19.302 "req_id": 1 00:07:19.302 } 00:07:19.302 Got JSON-RPC error response 00:07:19.302 response: 00:07:19.302 { 00:07:19.302 "code": -32603, 00:07:19.302 "message": "Unable to find target foobar" 00:07:19.302 }' 00:07:19.302 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:19.302 { 00:07:19.302 "nqn": "nqn.2016-06.io.spdk:cnode20675", 00:07:19.302 "tgt_name": "foobar", 00:07:19.302 "method": "nvmf_create_subsystem", 00:07:19.302 "req_id": 1 00:07:19.302 } 00:07:19.302 Got JSON-RPC error response 00:07:19.302 response: 00:07:19.302 { 00:07:19.302 "code": -32603, 00:07:19.302 "message": "Unable to find target foobar" 
00:07:19.302 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:19.302 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:19.302 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28559 00:07:19.302 [2024-07-15 17:30:14.422238] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28559: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:19.561 { 00:07:19.561 "nqn": "nqn.2016-06.io.spdk:cnode28559", 00:07:19.561 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:19.561 "method": "nvmf_create_subsystem", 00:07:19.561 "req_id": 1 00:07:19.561 } 00:07:19.561 Got JSON-RPC error response 00:07:19.561 response: 00:07:19.561 { 00:07:19.561 "code": -32602, 00:07:19.561 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:19.561 }' 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:19.561 { 00:07:19.561 "nqn": "nqn.2016-06.io.spdk:cnode28559", 00:07:19.561 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:19.561 "method": "nvmf_create_subsystem", 00:07:19.561 "req_id": 1 00:07:19.561 } 00:07:19.561 Got JSON-RPC error response 00:07:19.561 response: 00:07:19.561 { 00:07:19.561 "code": -32602, 00:07:19.561 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:19.561 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15260 00:07:19.561 [2024-07-15 17:30:14.667013] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15260: invalid model number 'SPDK_Controller' 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:19.561 { 00:07:19.561 "nqn": "nqn.2016-06.io.spdk:cnode15260", 00:07:19.561 "model_number": "SPDK_Controller\u001f", 00:07:19.561 "method": "nvmf_create_subsystem", 00:07:19.561 "req_id": 1 00:07:19.561 } 00:07:19.561 Got JSON-RPC error response 00:07:19.561 response: 00:07:19.561 { 00:07:19.561 "code": -32602, 00:07:19.561 "message": "Invalid MN SPDK_Controller\u001f" 00:07:19.561 }' 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:19.561 { 00:07:19.561 "nqn": "nqn.2016-06.io.spdk:cnode15260", 00:07:19.561 "model_number": "SPDK_Controller\u001f", 00:07:19.561 "method": "nvmf_create_subsystem", 00:07:19.561 "req_id": 1 00:07:19.561 } 00:07:19.561 Got JSON-RPC error response 00:07:19.561 response: 00:07:19.561 { 00:07:19.561 "code": -32602, 00:07:19.561 "message": "Invalid MN SPDK_Controller\u001f" 00:07:19.561 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.561 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 
17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
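The per-character trace continues below; compressed, the gen_random_s helper being exercised here amounts to the following sketch (an assumed simplification that draws printable ASCII 32-126 rather than the full 32-127 table shown in the trace).

    # Sketch only: build a $length-character string of random printable ASCII,
    # one character per iteration, as the loop above does verbosely.
    gen_random_s() {
        local length=$1 s='' code i
        for ((i = 0; i < length; i++)); do
            code=$((32 + RANDOM % 95))                 # 32..126
            s+=$(printf "\\$(printf '%03o' "$code")")  # chr(code)
        done
        echo "$s"
    }
    gen_random_s 21   # e.g. an invalid model number like the one assembled above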
00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:07:19.820 17:30:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '1EtooTC;i*WE /dev/null' 00:07:22.942 17:30:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.844 17:30:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:24.844 00:07:24.844 real 0m8.718s 00:07:24.844 user 0m20.071s 00:07:24.844 sys 0m2.472s 00:07:24.844 17:30:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.844 17:30:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:24.844 ************************************ 00:07:24.844 END TEST nvmf_invalid 00:07:24.844 ************************************ 00:07:25.104 17:30:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:25.104 17:30:19 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.104 17:30:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.104 17:30:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.104 17:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.104 ************************************ 00:07:25.104 START TEST nvmf_abort 00:07:25.104 ************************************ 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.104 * Looking for test storage... 00:07:25.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.104 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.105 17:30:20 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.105 17:30:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.011 
17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:27.011 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:27.011 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.011 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:27.012 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:27.012 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:27.012 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:27.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:27.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:07:27.271 00:07:27.271 --- 10.0.0.2 ping statistics --- 00:07:27.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.271 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:07:27.271 00:07:27.271 --- 10.0.0.1 ping statistics --- 00:07:27.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.271 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2151538 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2151538 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2151538 ']' 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.271 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.271 [2024-07-15 17:30:22.272340] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
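The namespace plumbing and connectivity check exercised for this test (nvmf_tcp_init in nvmf/common.sh) can be reconstructed from the traced commands as:

    # Reconstructed from the trace above; cvl_0_0/cvl_0_1 are the two E810 port
    # netdevs and 4420 is NVMF_PORT. The target side lives in cvl_0_0_ns_spdk.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator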
00:07:27.271 [2024-07-15 17:30:22.272438] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.271 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.271 [2024-07-15 17:30:22.341613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.534 [2024-07-15 17:30:22.463305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.534 [2024-07-15 17:30:22.463370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.534 [2024-07-15 17:30:22.463396] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.534 [2024-07-15 17:30:22.463409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.534 [2024-07-15 17:30:22.463421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.534 [2024-07-15 17:30:22.463509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.534 [2024-07-15 17:30:22.463574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.534 [2024-07-15 17:30:22.463578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.534 [2024-07-15 17:30:22.603148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.534 Malloc0 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.534 Delay0 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.534 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.794 [2024-07-15 17:30:22.677035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.794 17:30:22 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:27.794 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.794 [2024-07-15 17:30:22.742555] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:29.699 Initializing NVMe Controllers 00:07:29.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:29.699 controller IO queue size 128 less than required 00:07:29.699 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:29.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:29.699 Initialization complete. Launching workers. 
00:07:29.699 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33402 00:07:29.699 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33467, failed to submit 62 00:07:29.699 success 33406, unsuccess 61, failed 0 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.699 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.959 rmmod nvme_tcp 00:07:29.959 rmmod nvme_fabrics 00:07:29.959 rmmod nvme_keyring 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2151538 ']' 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2151538 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2151538 ']' 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2151538 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2151538 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2151538' 00:07:29.959 killing process with pid 2151538 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2151538 00:07:29.959 17:30:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2151538 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.218 17:30:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.125 17:30:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.407 00:07:32.407 real 0m7.257s 00:07:32.407 user 0m10.213s 00:07:32.407 sys 0m2.582s 00:07:32.407 17:30:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.407 17:30:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.407 ************************************ 00:07:32.407 END TEST nvmf_abort 00:07:32.407 ************************************ 00:07:32.407 17:30:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:32.407 17:30:27 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:32.407 17:30:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.407 17:30:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.407 17:30:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.407 ************************************ 00:07:32.407 START TEST nvmf_ns_hotplug_stress 00:07:32.407 ************************************ 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:32.407 * Looking for test storage... 00:07:32.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.407 17:30:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.407 17:30:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.407 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.408 17:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:34.316 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:34.316 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.316 17:30:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.316 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:34.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:34.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.317 17:30:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:34.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:34.317 00:07:34.317 --- 10.0.0.2 ping statistics --- 00:07:34.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.317 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:34.317 00:07:34.317 --- 10.0.0.1 ping statistics --- 00:07:34.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.317 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2153758 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2153758 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2153758 ']' 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.317 17:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:34.577 [2024-07-15 17:30:29.498595] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
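For readers following the trace, the network preparation that nvmftestinit performs above reduces to roughly the sequence below. This is a condensed sketch reconstructed from the xtrace output, not the exact nvmf/common.sh code; the interface names cvl_0_0/cvl_0_1, the namespace name, the 10.0.0.0/24 addresses and the nvmf_tgt arguments are taken verbatim from this log, while the shortened binary path and the trailing backgrounding are simplifications.

# Sketch of the TCP test-bed setup traced above (nvmf_tcp_init), assuming root on the test host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                          # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                    # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# The target is then started inside the namespace (PID 2153758 in this run; full path abbreviated here)
# and the test waits for it to listen on /var/tmp/spdk.sock before issuing RPCs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &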
00:07:34.577 [2024-07-15 17:30:29.498694] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.577 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.577 [2024-07-15 17:30:29.568104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.577 [2024-07-15 17:30:29.688244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.577 [2024-07-15 17:30:29.688320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.577 [2024-07-15 17:30:29.688337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.577 [2024-07-15 17:30:29.688351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.577 [2024-07-15 17:30:29.688362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.577 [2024-07-15 17:30:29.688456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.577 [2024-07-15 17:30:29.688517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.577 [2024-07-15 17:30:29.688520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:35.512 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.770 [2024-07-15 17:30:30.675583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.770 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:36.029 17:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.287 [2024-07-15 17:30:31.174251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.287 17:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.546 17:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:36.804 Malloc0 00:07:36.804 17:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.064 Delay0 00:07:37.064 17:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.322 17:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:37.322 NULL1 00:07:37.581 17:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:37.581 17:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2154184 00:07:37.581 17:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:37.581 17:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:37.581 17:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.888 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.888 17:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.164 17:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:38.164 17:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:38.420 true 00:07:38.420 17:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:38.420 17:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.678 17:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.935 17:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:38.935 17:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:39.191 true 00:07:39.191 17:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:39.191 17:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.128 Read completed with error (sct=0, sc=11) 00:07:40.128 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.128 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:40.128 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:40.386 true 00:07:40.386 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:40.386 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.643 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.901 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:40.901 17:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:41.158 true 00:07:41.158 17:30:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:41.158 17:30:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 17:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.532 17:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:42.532 17:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:42.788 true 00:07:42.788 17:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:42.788 17:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.718 17:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.718 17:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:43.718 17:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:43.975 true 00:07:43.975 17:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:43.975 17:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.232 17:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.489 17:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:44.489 17:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:44.746 true 00:07:44.746 17:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:44.746 17:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.680 17:30:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.939 17:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:45.939 17:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:46.196 true 00:07:46.454 17:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:46.454 17:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.454 17:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.711 17:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:46.711 17:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:46.969 true 00:07:46.969 17:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:46.969 17:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.227 17:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.484 17:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:47.484 17:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:47.742 true 00:07:47.742 17:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:47.742 17:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.117 17:30:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.117 17:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:49.117 17:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:49.375 true 00:07:49.375 17:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:49.375 17:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.311 17:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.311 17:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:50.311 17:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:50.569 true 00:07:50.569 17:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:50.569 17:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.827 17:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.086 17:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:51.086 17:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1013 00:07:51.344 true 00:07:51.344 17:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:51.344 17:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.309 17:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.568 17:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:52.568 17:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:52.826 true 00:07:52.826 17:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:52.826 17:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.083 17:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.341 17:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:53.341 17:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:53.600 true 00:07:53.600 17:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:53.600 17:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.537 17:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.537 17:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:54.537 17:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:54.795 true 00:07:54.795 17:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:54.795 17:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.053 17:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.311 17:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:55.311 17:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:55.569 true 00:07:55.569 17:30:50 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:55.569 17:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.827 17:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.085 17:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:56.085 17:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:56.343 true 00:07:56.343 17:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:56.343 17:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.278 17:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.536 17:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:57.536 17:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:57.793 true 00:07:57.793 17:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:57.793 17:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.051 17:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.309 17:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:58.309 17:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:58.566 true 00:07:58.566 17:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:58.566 17:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.823 17:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.080 17:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:59.080 17:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:59.338 true 00:07:59.338 17:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:07:59.338 17:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.274 17:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.532 17:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:00.532 17:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:01.098 true 00:08:01.098 17:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:01.098 17:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.663 17:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.921 17:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:01.921 17:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:02.179 true 00:08:02.179 17:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:02.179 17:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.438 17:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.695 17:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:02.695 17:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:02.953 true 00:08:02.953 17:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:02.953 17:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.211 17:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.469 17:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1025 00:08:03.469 17:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:03.727 true 00:08:03.727 17:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:03.727 17:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.659 17:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.222 17:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:05.222 17:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:05.222 true 00:08:05.222 17:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:05.222 17:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.508 17:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.771 17:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:05.771 17:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:06.027 true 00:08:06.027 17:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:06.027 17:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.283 17:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.540 17:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:06.540 17:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:06.797 true 00:08:06.797 17:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:06.797 17:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.168 17:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.168 Initializing NVMe Controllers 00:08:08.168 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:08.168 Controller IO queue size 128, less than required. 00:08:08.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.168 Controller IO queue size 128, less than required. 00:08:08.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:08.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:08.168 Initialization complete. Launching workers. 00:08:08.168 ======================================================== 00:08:08.168 Latency(us) 00:08:08.168 Device Information : IOPS MiB/s Average min max 00:08:08.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1090.67 0.53 58033.10 2645.51 1036952.50 00:08:08.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10459.93 5.11 12200.55 2854.73 446309.53 00:08:08.168 ======================================================== 00:08:08.168 Total : 11550.60 5.64 16528.30 2645.51 1036952.50 00:08:08.168 00:08:08.168 17:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:08.168 17:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:08.425 true 00:08:08.425 17:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2154184 00:08:08.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2154184) - No such process 00:08:08.425 17:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2154184 00:08:08.425 17:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.683 17:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.940 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:08.940 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:08.940 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:08.940 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.940 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:09.198 null0 00:08:09.198 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.198 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.198 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:09.455 null1 00:08:09.455 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.455 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.455 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:09.711 null2 00:08:09.711 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.711 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.711 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:09.969 null3 00:08:09.969 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.969 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.969 17:31:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:10.226 null4 00:08:10.226 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.226 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.226 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:10.484 null5 00:08:10.484 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.484 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.484 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:10.742 null6 00:08:10.742 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.742 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.742 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:11.001 null7 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
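The control-plane work behind the perf numbers printed earlier can be summarised as the RPC sequence below. It is a condensed sketch of what the xtrace shows rather than a copy of ns_hotplug_stress.sh: the rpc.py path is abbreviated, the loop is written as a plain while loop, and the stderr redirect on kill is an addition for readability; every RPC name, argument and the perf command line are taken from the trace.

rpc="scripts/rpc.py"   # abbreviation of the full workspace path shown in the trace
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# I/O load for the duration of the hotplug loop (PERF_PID=2154184 in this run):
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
perf_pid=$!

# Hotplug/resize loop: while perf is still running, churn namespace 1 and keep growing NULL1.
null_size=1000
while kill -0 "$perf_pid" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done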
00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.001 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
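After the single-namespace phase, the test moves to eight concurrent workers (nthreads=8), one per null bdev, whose interleaved xtrace appears around this point. In sketch form, each worker is the add_remove function being traced here; the nsid/bdev pairing (1..8 against null0..null7), the ten iterations and the final wait are taken from the trace, while the function body and loop structure are a simplification of ns_hotplug_stress.sh rather than a copy of it.

rpc="scripts/rpc.py"   # abbreviation of the full workspace path shown in the trace

# One add_remove worker, as reconstructed from the xtrace above.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

# Eight workers run in parallel, then the test waits for all of them.
pids=()
for ((i = 0; i < 8; i++)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"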
00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2158239 2158240 2158242 2158244 2158246 2158248 2158250 2158252 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.002 17:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.261 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.520 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.778 17:31:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.036 
17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.036 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.037 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.037 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.037 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.037 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.295 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.553 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.811 17:31:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 
17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.070 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.328 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.329 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.329 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.329 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.329 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.329 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.329 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.587 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.587 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.587 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.587 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.587 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.587 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.587 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.845 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.845 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.846 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.104 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.104 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.104 17:31:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.104 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:14.104 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.104 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.104 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.104 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 
17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.362 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.619 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.876 17:31:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.134 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.392 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.650 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.908 17:31:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.165 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.423 rmmod nvme_tcp 00:08:16.423 rmmod nvme_fabrics 00:08:16.423 rmmod nvme_keyring 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2153758 ']' 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2153758 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2153758 ']' 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2153758 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:16.423 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.423 17:31:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2153758 00:08:16.681 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:16.681 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:16.681 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2153758' 00:08:16.681 killing process with pid 2153758 00:08:16.681 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2153758 00:08:16.681 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2153758 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.940 17:31:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.852 17:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.852 00:08:18.852 real 0m46.579s 00:08:18.852 user 3m32.091s 00:08:18.852 sys 0m16.316s 00:08:18.852 17:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.852 17:31:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.852 ************************************ 00:08:18.852 END TEST nvmf_ns_hotplug_stress 00:08:18.852 ************************************ 00:08:18.852 17:31:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.852 17:31:13 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:18.852 17:31:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.852 17:31:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.852 17:31:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.852 ************************************ 00:08:18.852 START TEST nvmf_connect_stress 00:08:18.852 ************************************ 00:08:18.852 17:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:18.852 * Looking for test storage... 
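The repetitive xtrace above is the core of the hotplug stress: target/ns_hotplug_stress.sh starts eight add_remove workers in the background (one per null bdev, namespace IDs 1-8), each of which adds and then removes its namespace on nqn.2016-06.io.spdk:cnode1 ten times through rpc.py, while the parent shell waits on all eight PIDs before tearing the target down. A minimal sketch of that loop, reconstructed from the script line numbers shown in the trace (helper structure and variable names are assumptions, not the script verbatim):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
subsys=nqn.2016-06.io.spdk:cnode1
nthreads=8

add_remove() {                       # ns_hotplug_stress.sh@14-18 in the trace
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

pids=()                              # ns_hotplug_stress.sh@62-66: launch the workers, then wait on them
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done
wait "${pids[@]}"

Because every worker hammers the same subsystem concurrently, the add/remove RPCs interleave in the log in arbitrary order, which is exactly the interleaving visible above.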
00:08:19.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.116 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.117 17:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.019 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.019 17:31:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.019 17:31:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:08:21.019 00:08:21.019 --- 10.0.0.2 ping statistics --- 00:08:21.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.019 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:08:21.019 00:08:21.019 --- 10.0.0.1 ping statistics --- 00:08:21.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.019 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.019 17:31:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2160999 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2160999 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2160999 ']' 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.020 17:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.020 [2024-07-15 17:31:16.091448] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
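Condensed, the nvmftestinit plumbing traced above is roughly the following sequence; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this run, not fixed values:

  # target-side NIC moves into its own network namespace, initiator-side NIC stays in the default one
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # and back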
00:08:21.020 [2024-07-15 17:31:16.091515] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.020 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.278 [2024-07-15 17:31:16.157571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.278 [2024-07-15 17:31:16.273832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.278 [2024-07-15 17:31:16.273912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.278 [2024-07-15 17:31:16.273937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.278 [2024-07-15 17:31:16.273951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.278 [2024-07-15 17:31:16.273963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.278 [2024-07-15 17:31:16.274066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.278 [2024-07-15 17:31:16.274162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.278 [2024-07-15 17:31:16.274166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.212 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 [2024-07-15 17:31:17.050396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 [2024-07-15 17:31:17.075999] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 NULL1 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2161152 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.471 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.471 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:22.471 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.471 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.471 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.727 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.727 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:22.727 17:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.727 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.727 17:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.983 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.983 17:31:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 
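For reference, the target setup traced above maps onto a short RPC sequence. rpc_cmd in these scripts is a helper that forwards the same verbs to scripts/rpc.py over /var/tmp/spdk.sock, so a roughly equivalent standalone reproduction (a sketch, not the script itself) would be:

  # stand up the TCP transport, a subsystem backed by a null bdev, and a listener on the target address
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512
  # then point the stress tool at the listener (command taken verbatim from the trace)
  test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10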
00:08:22.983 17:31:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.983 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.983 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.547 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.547 17:31:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:23.547 17:31:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.547 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.547 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.804 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.804 17:31:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:23.804 17:31:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.804 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.804 17:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.061 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.061 17:31:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:24.062 17:31:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.062 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.062 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.337 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.337 17:31:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:24.337 17:31:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.337 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.337 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.594 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.594 17:31:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:24.594 17:31:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.594 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.594 17:31:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.157 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.157 17:31:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:25.157 17:31:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.157 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.157 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.414 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.414 17:31:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:25.414 17:31:20 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.414 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.414 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.671 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.671 17:31:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:25.671 17:31:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.671 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.671 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.928 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.928 17:31:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:25.928 17:31:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.928 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.928 17:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.186 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.186 17:31:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:26.186 17:31:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.186 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.186 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.752 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.752 17:31:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:26.752 17:31:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.752 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.752 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.009 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.009 17:31:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:27.009 17:31:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.009 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.009 17:31:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.267 17:31:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:27.267 17:31:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.267 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.267 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.525 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.525 17:31:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:27.525 17:31:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.525 
17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.525 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.782 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.782 17:31:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:27.782 17:31:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.782 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.782 17:31:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.347 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.347 17:31:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:28.347 17:31:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.347 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.347 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.604 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.604 17:31:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:28.604 17:31:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.604 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.604 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 17:31:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:28.861 17:31:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.861 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 17:31:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.119 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.119 17:31:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:29.119 17:31:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.119 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.119 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.684 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.684 17:31:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:29.684 17:31:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.684 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.684 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.941 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.941 17:31:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:29.941 17:31:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.941 17:31:24 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.941 17:31:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.199 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.199 17:31:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:30.199 17:31:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.199 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.199 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.492 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.492 17:31:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:30.492 17:31:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.492 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.492 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.749 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.749 17:31:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:30.749 17:31:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.749 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.749 17:31:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.007 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.007 17:31:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:31.007 17:31:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.007 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.007 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.572 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.572 17:31:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:31.572 17:31:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.572 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.572 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.829 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.829 17:31:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:31.830 17:31:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.830 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.830 17:31:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.086 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.086 17:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:32.087 17:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.087 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
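The repetition above is the test's watchdog pattern: the same two traced commands recur every half second or so until the stress tool exits. Reconstructed loosely from the visible pattern (a paraphrase, not the verbatim connect_stress.sh source), the loop amounts to:

  # keep replaying the batched RPCs for as long as the connect_stress process is alive
  while kill -0 "$PERF_PID"; do
      rpc_cmd < "$rpcs"    # $rpcs is the rpc.txt batch assembled by the seq 1 20 / cat loop earlier
  done

The "No such process" message just below is the loop's natural exit condition once PERF_PID (2161152 here) has finished.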
00:08:32.087 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.087 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2161152 00:08:32.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2161152) - No such process 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2161152 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.345 rmmod nvme_tcp 00:08:32.345 rmmod nvme_fabrics 00:08:32.345 rmmod nvme_keyring 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2160999 ']' 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2160999 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2160999 ']' 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2160999 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.345 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2160999 00:08:32.603 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:32.603 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:32.603 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2160999' 00:08:32.603 killing process with pid 2160999 00:08:32.603 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2160999 00:08:32.603 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2160999 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.863 17:31:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.767 17:31:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.767 00:08:34.767 real 0m15.872s 00:08:34.767 user 0m40.350s 00:08:34.767 sys 0m5.846s 00:08:34.767 17:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.767 17:31:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.767 ************************************ 00:08:34.767 END TEST nvmf_connect_stress 00:08:34.767 ************************************ 00:08:34.767 17:31:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:34.767 17:31:29 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:34.767 17:31:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.767 17:31:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.767 17:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.767 ************************************ 00:08:34.767 START TEST nvmf_fused_ordering 00:08:34.767 ************************************ 00:08:34.767 17:31:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:34.767 * Looking for test storage... 
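Each sub-test in this suite is dispatched the same way: run_test wraps the per-test script, times it, and emits the START TEST / END TEST banners and the real/user/sys summary seen above. In principle the same script can be invoked directly against a prepared host, e.g. (path taken from this workspace; the spdk checkout location will differ elsewhere):

  # rerun just the fused-ordering test over TCP
  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp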
00:08:35.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.026 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.027 17:31:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.929 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:36.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:36.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:36.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.930 17:31:31 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:36.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.930 17:31:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.930 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.930 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.930 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.930 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.930 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.189 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.189 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:37.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:08:37.189 00:08:37.189 --- 10.0.0.2 ping statistics --- 00:08:37.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.189 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:08:37.189 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:37.190 00:08:37.190 --- 10.0.0.1 ping statistics --- 00:08:37.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.190 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2164308 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2164308 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2164308 ']' 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.190 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.190 [2024-07-15 17:31:32.155220] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
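For anyone replaying this environment by hand, the nvmf_tcp_init block traced above reduces to the shell sequence below. This is a consolidated sketch of the commands the trace already prints, reusing the interface names and addresses it reports (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2); it is not an excerpt of nvmf/common.sh itself.

    # target port moves into its own network namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application is then launched inside the namespace (pid 2164308 in the trace above)
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2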
00:08:37.190 [2024-07-15 17:31:32.155301] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.190 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.190 [2024-07-15 17:31:32.226685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.448 [2024-07-15 17:31:32.347826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.448 [2024-07-15 17:31:32.347892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.448 [2024-07-15 17:31:32.347911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.448 [2024-07-15 17:31:32.347938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.448 [2024-07-15 17:31:32.347948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.448 [2024-07-15 17:31:32.347982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.448 [2024-07-15 17:31:32.498680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.448 [2024-07-15 17:31:32.514897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.448 17:31:32 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.448 NULL1 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.448 17:31:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:37.448 [2024-07-15 17:31:32.562051] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:08:37.449 [2024-07-15 17:31:32.562094] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164328 ] 00:08:37.706 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.272 Attached to nqn.2016-06.io.spdk:cnode1 00:08:38.272 Namespace ID: 1 size: 1GB 00:08:38.272 fused_ordering(0) 00:08:38.272 fused_ordering(1) 00:08:38.272 fused_ordering(2) 00:08:38.273 fused_ordering(3) 00:08:38.273 fused_ordering(4) 00:08:38.273 fused_ordering(5) 00:08:38.273 fused_ordering(6) 00:08:38.273 fused_ordering(7) 00:08:38.273 fused_ordering(8) 00:08:38.273 fused_ordering(9) 00:08:38.273 fused_ordering(10) 00:08:38.273 fused_ordering(11) 00:08:38.273 fused_ordering(12) 00:08:38.273 fused_ordering(13) 00:08:38.273 fused_ordering(14) 00:08:38.273 fused_ordering(15) 00:08:38.273 fused_ordering(16) 00:08:38.273 fused_ordering(17) 00:08:38.273 fused_ordering(18) 00:08:38.273 fused_ordering(19) 00:08:38.273 fused_ordering(20) 00:08:38.273 fused_ordering(21) 00:08:38.273 fused_ordering(22) 00:08:38.273 fused_ordering(23) 00:08:38.273 fused_ordering(24) 00:08:38.273 fused_ordering(25) 00:08:38.273 fused_ordering(26) 00:08:38.273 fused_ordering(27) 00:08:38.273 fused_ordering(28) 00:08:38.273 fused_ordering(29) 00:08:38.273 fused_ordering(30) 00:08:38.273 fused_ordering(31) 00:08:38.273 fused_ordering(32) 00:08:38.273 fused_ordering(33) 00:08:38.273 fused_ordering(34) 00:08:38.273 fused_ordering(35) 00:08:38.273 fused_ordering(36) 00:08:38.273 fused_ordering(37) 00:08:38.273 fused_ordering(38) 00:08:38.273 fused_ordering(39) 00:08:38.273 fused_ordering(40) 00:08:38.273 fused_ordering(41) 00:08:38.273 fused_ordering(42) 00:08:38.273 fused_ordering(43) 00:08:38.273 
fused_ordering(44) 00:08:38.273 fused_ordering(45) 00:08:38.273 fused_ordering(46) 00:08:38.273 fused_ordering(47) 00:08:38.273 fused_ordering(48) 00:08:38.273 fused_ordering(49) 00:08:38.273 fused_ordering(50) 00:08:38.273 fused_ordering(51) 00:08:38.273 fused_ordering(52) 00:08:38.273 fused_ordering(53) 00:08:38.273 fused_ordering(54) 00:08:38.273 fused_ordering(55) 00:08:38.273 fused_ordering(56) 00:08:38.273 fused_ordering(57) 00:08:38.273 fused_ordering(58) 00:08:38.273 fused_ordering(59) 00:08:38.273 fused_ordering(60) 00:08:38.273 fused_ordering(61) 00:08:38.273 fused_ordering(62) 00:08:38.273 fused_ordering(63) 00:08:38.273 fused_ordering(64) 00:08:38.273 fused_ordering(65) 00:08:38.273 fused_ordering(66) 00:08:38.273 fused_ordering(67) 00:08:38.273 fused_ordering(68) 00:08:38.273 fused_ordering(69) 00:08:38.273 fused_ordering(70) 00:08:38.273 fused_ordering(71) 00:08:38.273 fused_ordering(72) 00:08:38.273 fused_ordering(73) 00:08:38.273 fused_ordering(74) 00:08:38.273 fused_ordering(75) 00:08:38.273 fused_ordering(76) 00:08:38.273 fused_ordering(77) 00:08:38.273 fused_ordering(78) 00:08:38.273 fused_ordering(79) 00:08:38.273 fused_ordering(80) 00:08:38.273 fused_ordering(81) 00:08:38.273 fused_ordering(82) 00:08:38.273 fused_ordering(83) 00:08:38.273 fused_ordering(84) 00:08:38.273 fused_ordering(85) 00:08:38.273 fused_ordering(86) 00:08:38.273 fused_ordering(87) 00:08:38.273 fused_ordering(88) 00:08:38.273 fused_ordering(89) 00:08:38.273 fused_ordering(90) 00:08:38.273 fused_ordering(91) 00:08:38.273 fused_ordering(92) 00:08:38.273 fused_ordering(93) 00:08:38.273 fused_ordering(94) 00:08:38.273 fused_ordering(95) 00:08:38.273 fused_ordering(96) 00:08:38.273 fused_ordering(97) 00:08:38.273 fused_ordering(98) 00:08:38.273 fused_ordering(99) 00:08:38.273 fused_ordering(100) 00:08:38.273 fused_ordering(101) 00:08:38.273 fused_ordering(102) 00:08:38.273 fused_ordering(103) 00:08:38.273 fused_ordering(104) 00:08:38.273 fused_ordering(105) 00:08:38.273 fused_ordering(106) 00:08:38.273 fused_ordering(107) 00:08:38.273 fused_ordering(108) 00:08:38.273 fused_ordering(109) 00:08:38.273 fused_ordering(110) 00:08:38.273 fused_ordering(111) 00:08:38.273 fused_ordering(112) 00:08:38.273 fused_ordering(113) 00:08:38.273 fused_ordering(114) 00:08:38.273 fused_ordering(115) 00:08:38.273 fused_ordering(116) 00:08:38.273 fused_ordering(117) 00:08:38.273 fused_ordering(118) 00:08:38.273 fused_ordering(119) 00:08:38.273 fused_ordering(120) 00:08:38.273 fused_ordering(121) 00:08:38.273 fused_ordering(122) 00:08:38.273 fused_ordering(123) 00:08:38.273 fused_ordering(124) 00:08:38.273 fused_ordering(125) 00:08:38.273 fused_ordering(126) 00:08:38.273 fused_ordering(127) 00:08:38.273 fused_ordering(128) 00:08:38.273 fused_ordering(129) 00:08:38.273 fused_ordering(130) 00:08:38.273 fused_ordering(131) 00:08:38.273 fused_ordering(132) 00:08:38.273 fused_ordering(133) 00:08:38.273 fused_ordering(134) 00:08:38.273 fused_ordering(135) 00:08:38.273 fused_ordering(136) 00:08:38.273 fused_ordering(137) 00:08:38.273 fused_ordering(138) 00:08:38.273 fused_ordering(139) 00:08:38.273 fused_ordering(140) 00:08:38.273 fused_ordering(141) 00:08:38.273 fused_ordering(142) 00:08:38.273 fused_ordering(143) 00:08:38.273 fused_ordering(144) 00:08:38.273 fused_ordering(145) 00:08:38.273 fused_ordering(146) 00:08:38.273 fused_ordering(147) 00:08:38.273 fused_ordering(148) 00:08:38.273 fused_ordering(149) 00:08:38.273 fused_ordering(150) 00:08:38.273 fused_ordering(151) 00:08:38.273 fused_ordering(152) 00:08:38.273 
fused_ordering(153) 00:08:38.273 fused_ordering(154) 00:08:38.273 fused_ordering(155) 00:08:38.273 fused_ordering(156) 00:08:38.273 fused_ordering(157) 00:08:38.273 fused_ordering(158) 00:08:38.273 fused_ordering(159) 00:08:38.273 fused_ordering(160) 00:08:38.273 fused_ordering(161) 00:08:38.273 fused_ordering(162) 00:08:38.273 fused_ordering(163) 00:08:38.273 fused_ordering(164) 00:08:38.273 fused_ordering(165) 00:08:38.273 fused_ordering(166) 00:08:38.273 fused_ordering(167) 00:08:38.273 fused_ordering(168) 00:08:38.273 fused_ordering(169) 00:08:38.273 fused_ordering(170) 00:08:38.273 fused_ordering(171) 00:08:38.273 fused_ordering(172) 00:08:38.273 fused_ordering(173) 00:08:38.273 fused_ordering(174) 00:08:38.273 fused_ordering(175) 00:08:38.273 fused_ordering(176) 00:08:38.273 fused_ordering(177) 00:08:38.273 fused_ordering(178) 00:08:38.273 fused_ordering(179) 00:08:38.273 fused_ordering(180) 00:08:38.273 fused_ordering(181) 00:08:38.273 fused_ordering(182) 00:08:38.273 fused_ordering(183) 00:08:38.273 fused_ordering(184) 00:08:38.273 fused_ordering(185) 00:08:38.273 fused_ordering(186) 00:08:38.273 fused_ordering(187) 00:08:38.273 fused_ordering(188) 00:08:38.273 fused_ordering(189) 00:08:38.273 fused_ordering(190) 00:08:38.273 fused_ordering(191) 00:08:38.273 fused_ordering(192) 00:08:38.273 fused_ordering(193) 00:08:38.273 fused_ordering(194) 00:08:38.273 fused_ordering(195) 00:08:38.273 fused_ordering(196) 00:08:38.273 fused_ordering(197) 00:08:38.273 fused_ordering(198) 00:08:38.273 fused_ordering(199) 00:08:38.273 fused_ordering(200) 00:08:38.273 fused_ordering(201) 00:08:38.273 fused_ordering(202) 00:08:38.273 fused_ordering(203) 00:08:38.273 fused_ordering(204) 00:08:38.273 fused_ordering(205) 00:08:38.838 fused_ordering(206) 00:08:38.838 fused_ordering(207) 00:08:38.838 fused_ordering(208) 00:08:38.838 fused_ordering(209) 00:08:38.838 fused_ordering(210) 00:08:38.838 fused_ordering(211) 00:08:38.838 fused_ordering(212) 00:08:38.838 fused_ordering(213) 00:08:38.838 fused_ordering(214) 00:08:38.838 fused_ordering(215) 00:08:38.838 fused_ordering(216) 00:08:38.838 fused_ordering(217) 00:08:38.838 fused_ordering(218) 00:08:38.838 fused_ordering(219) 00:08:38.838 fused_ordering(220) 00:08:38.838 fused_ordering(221) 00:08:38.838 fused_ordering(222) 00:08:38.838 fused_ordering(223) 00:08:38.838 fused_ordering(224) 00:08:38.839 fused_ordering(225) 00:08:38.839 fused_ordering(226) 00:08:38.839 fused_ordering(227) 00:08:38.839 fused_ordering(228) 00:08:38.839 fused_ordering(229) 00:08:38.839 fused_ordering(230) 00:08:38.839 fused_ordering(231) 00:08:38.839 fused_ordering(232) 00:08:38.839 fused_ordering(233) 00:08:38.839 fused_ordering(234) 00:08:38.839 fused_ordering(235) 00:08:38.839 fused_ordering(236) 00:08:38.839 fused_ordering(237) 00:08:38.839 fused_ordering(238) 00:08:38.839 fused_ordering(239) 00:08:38.839 fused_ordering(240) 00:08:38.839 fused_ordering(241) 00:08:38.839 fused_ordering(242) 00:08:38.839 fused_ordering(243) 00:08:38.839 fused_ordering(244) 00:08:38.839 fused_ordering(245) 00:08:38.839 fused_ordering(246) 00:08:38.839 fused_ordering(247) 00:08:38.839 fused_ordering(248) 00:08:38.839 fused_ordering(249) 00:08:38.839 fused_ordering(250) 00:08:38.839 fused_ordering(251) 00:08:38.839 fused_ordering(252) 00:08:38.839 fused_ordering(253) 00:08:38.839 fused_ordering(254) 00:08:38.839 fused_ordering(255) 00:08:38.839 fused_ordering(256) 00:08:38.839 fused_ordering(257) 00:08:38.839 fused_ordering(258) 00:08:38.839 fused_ordering(259) 00:08:38.839 fused_ordering(260) 
00:08:38.839 fused_ordering(261) 00:08:38.839 fused_ordering(262) 00:08:38.839 fused_ordering(263) 00:08:38.839 fused_ordering(264) 00:08:38.839 fused_ordering(265) 00:08:38.839 fused_ordering(266) 00:08:38.839 fused_ordering(267) 00:08:38.839 fused_ordering(268) 00:08:38.839 fused_ordering(269) 00:08:38.839 fused_ordering(270) 00:08:38.839 fused_ordering(271) 00:08:38.839 fused_ordering(272) 00:08:38.839 fused_ordering(273) 00:08:38.839 fused_ordering(274) 00:08:38.839 fused_ordering(275) 00:08:38.839 fused_ordering(276) 00:08:38.839 fused_ordering(277) 00:08:38.839 fused_ordering(278) 00:08:38.839 fused_ordering(279) 00:08:38.839 fused_ordering(280) 00:08:38.839 fused_ordering(281) 00:08:38.839 fused_ordering(282) 00:08:38.839 fused_ordering(283) 00:08:38.839 fused_ordering(284) 00:08:38.839 fused_ordering(285) 00:08:38.839 fused_ordering(286) 00:08:38.839 fused_ordering(287) 00:08:38.839 fused_ordering(288) 00:08:38.839 fused_ordering(289) 00:08:38.839 fused_ordering(290) 00:08:38.839 fused_ordering(291) 00:08:38.839 fused_ordering(292) 00:08:38.839 fused_ordering(293) 00:08:38.839 fused_ordering(294) 00:08:38.839 fused_ordering(295) 00:08:38.839 fused_ordering(296) 00:08:38.839 fused_ordering(297) 00:08:38.839 fused_ordering(298) 00:08:38.839 fused_ordering(299) 00:08:38.839 fused_ordering(300) 00:08:38.839 fused_ordering(301) 00:08:38.839 fused_ordering(302) 00:08:38.839 fused_ordering(303) 00:08:38.839 fused_ordering(304) 00:08:38.839 fused_ordering(305) 00:08:38.839 fused_ordering(306) 00:08:38.839 fused_ordering(307) 00:08:38.839 fused_ordering(308) 00:08:38.839 fused_ordering(309) 00:08:38.839 fused_ordering(310) 00:08:38.839 fused_ordering(311) 00:08:38.839 fused_ordering(312) 00:08:38.839 fused_ordering(313) 00:08:38.839 fused_ordering(314) 00:08:38.839 fused_ordering(315) 00:08:38.839 fused_ordering(316) 00:08:38.839 fused_ordering(317) 00:08:38.839 fused_ordering(318) 00:08:38.839 fused_ordering(319) 00:08:38.839 fused_ordering(320) 00:08:38.839 fused_ordering(321) 00:08:38.839 fused_ordering(322) 00:08:38.839 fused_ordering(323) 00:08:38.839 fused_ordering(324) 00:08:38.839 fused_ordering(325) 00:08:38.839 fused_ordering(326) 00:08:38.839 fused_ordering(327) 00:08:38.839 fused_ordering(328) 00:08:38.839 fused_ordering(329) 00:08:38.839 fused_ordering(330) 00:08:38.839 fused_ordering(331) 00:08:38.839 fused_ordering(332) 00:08:38.839 fused_ordering(333) 00:08:38.839 fused_ordering(334) 00:08:38.839 fused_ordering(335) 00:08:38.839 fused_ordering(336) 00:08:38.839 fused_ordering(337) 00:08:38.839 fused_ordering(338) 00:08:38.839 fused_ordering(339) 00:08:38.839 fused_ordering(340) 00:08:38.839 fused_ordering(341) 00:08:38.839 fused_ordering(342) 00:08:38.839 fused_ordering(343) 00:08:38.839 fused_ordering(344) 00:08:38.839 fused_ordering(345) 00:08:38.839 fused_ordering(346) 00:08:38.839 fused_ordering(347) 00:08:38.839 fused_ordering(348) 00:08:38.839 fused_ordering(349) 00:08:38.839 fused_ordering(350) 00:08:38.839 fused_ordering(351) 00:08:38.839 fused_ordering(352) 00:08:38.839 fused_ordering(353) 00:08:38.839 fused_ordering(354) 00:08:38.839 fused_ordering(355) 00:08:38.839 fused_ordering(356) 00:08:38.839 fused_ordering(357) 00:08:38.839 fused_ordering(358) 00:08:38.839 fused_ordering(359) 00:08:38.839 fused_ordering(360) 00:08:38.839 fused_ordering(361) 00:08:38.839 fused_ordering(362) 00:08:38.839 fused_ordering(363) 00:08:38.839 fused_ordering(364) 00:08:38.839 fused_ordering(365) 00:08:38.839 fused_ordering(366) 00:08:38.839 fused_ordering(367) 00:08:38.839 
fused_ordering(368) 00:08:38.839 fused_ordering(369) 00:08:38.839 fused_ordering(370) 00:08:38.839 fused_ordering(371) 00:08:38.839 fused_ordering(372) 00:08:38.839 fused_ordering(373) 00:08:38.839 fused_ordering(374) 00:08:38.839 fused_ordering(375) 00:08:38.839 fused_ordering(376) 00:08:38.839 fused_ordering(377) 00:08:38.839 fused_ordering(378) 00:08:38.839 fused_ordering(379) 00:08:38.839 fused_ordering(380) 00:08:38.839 fused_ordering(381) 00:08:38.839 fused_ordering(382) 00:08:38.839 fused_ordering(383) 00:08:38.839 fused_ordering(384) 00:08:38.839 fused_ordering(385) 00:08:38.839 fused_ordering(386) 00:08:38.839 fused_ordering(387) 00:08:38.839 fused_ordering(388) 00:08:38.839 fused_ordering(389) 00:08:38.839 fused_ordering(390) 00:08:38.839 fused_ordering(391) 00:08:38.839 fused_ordering(392) 00:08:38.839 fused_ordering(393) 00:08:38.839 fused_ordering(394) 00:08:38.839 fused_ordering(395) 00:08:38.839 fused_ordering(396) 00:08:38.839 fused_ordering(397) 00:08:38.839 fused_ordering(398) 00:08:38.839 fused_ordering(399) 00:08:38.839 fused_ordering(400) 00:08:38.839 fused_ordering(401) 00:08:38.839 fused_ordering(402) 00:08:38.839 fused_ordering(403) 00:08:38.839 fused_ordering(404) 00:08:38.839 fused_ordering(405) 00:08:38.839 fused_ordering(406) 00:08:38.839 fused_ordering(407) 00:08:38.839 fused_ordering(408) 00:08:38.839 fused_ordering(409) 00:08:38.839 fused_ordering(410) 00:08:39.404 fused_ordering(411) 00:08:39.404 fused_ordering(412) 00:08:39.404 fused_ordering(413) 00:08:39.404 fused_ordering(414) 00:08:39.404 fused_ordering(415) 00:08:39.404 fused_ordering(416) 00:08:39.404 fused_ordering(417) 00:08:39.404 fused_ordering(418) 00:08:39.404 fused_ordering(419) 00:08:39.404 fused_ordering(420) 00:08:39.404 fused_ordering(421) 00:08:39.404 fused_ordering(422) 00:08:39.404 fused_ordering(423) 00:08:39.404 fused_ordering(424) 00:08:39.404 fused_ordering(425) 00:08:39.404 fused_ordering(426) 00:08:39.404 fused_ordering(427) 00:08:39.404 fused_ordering(428) 00:08:39.404 fused_ordering(429) 00:08:39.404 fused_ordering(430) 00:08:39.404 fused_ordering(431) 00:08:39.404 fused_ordering(432) 00:08:39.404 fused_ordering(433) 00:08:39.404 fused_ordering(434) 00:08:39.404 fused_ordering(435) 00:08:39.404 fused_ordering(436) 00:08:39.404 fused_ordering(437) 00:08:39.404 fused_ordering(438) 00:08:39.404 fused_ordering(439) 00:08:39.404 fused_ordering(440) 00:08:39.404 fused_ordering(441) 00:08:39.404 fused_ordering(442) 00:08:39.404 fused_ordering(443) 00:08:39.404 fused_ordering(444) 00:08:39.404 fused_ordering(445) 00:08:39.404 fused_ordering(446) 00:08:39.404 fused_ordering(447) 00:08:39.404 fused_ordering(448) 00:08:39.404 fused_ordering(449) 00:08:39.404 fused_ordering(450) 00:08:39.404 fused_ordering(451) 00:08:39.404 fused_ordering(452) 00:08:39.404 fused_ordering(453) 00:08:39.404 fused_ordering(454) 00:08:39.404 fused_ordering(455) 00:08:39.404 fused_ordering(456) 00:08:39.404 fused_ordering(457) 00:08:39.404 fused_ordering(458) 00:08:39.404 fused_ordering(459) 00:08:39.404 fused_ordering(460) 00:08:39.404 fused_ordering(461) 00:08:39.404 fused_ordering(462) 00:08:39.404 fused_ordering(463) 00:08:39.404 fused_ordering(464) 00:08:39.404 fused_ordering(465) 00:08:39.404 fused_ordering(466) 00:08:39.404 fused_ordering(467) 00:08:39.404 fused_ordering(468) 00:08:39.404 fused_ordering(469) 00:08:39.404 fused_ordering(470) 00:08:39.404 fused_ordering(471) 00:08:39.404 fused_ordering(472) 00:08:39.404 fused_ordering(473) 00:08:39.404 fused_ordering(474) 00:08:39.404 fused_ordering(475) 
00:08:39.404 fused_ordering(476) 00:08:39.404 fused_ordering(477) 00:08:39.404 fused_ordering(478) 00:08:39.404 fused_ordering(479) 00:08:39.404 fused_ordering(480) 00:08:39.404 fused_ordering(481) 00:08:39.404 fused_ordering(482) 00:08:39.404 fused_ordering(483) 00:08:39.404 fused_ordering(484) 00:08:39.404 fused_ordering(485) 00:08:39.404 fused_ordering(486) 00:08:39.404 fused_ordering(487) 00:08:39.404 fused_ordering(488) 00:08:39.404 fused_ordering(489) 00:08:39.404 fused_ordering(490) 00:08:39.404 fused_ordering(491) 00:08:39.404 fused_ordering(492) 00:08:39.404 fused_ordering(493) 00:08:39.404 fused_ordering(494) 00:08:39.404 fused_ordering(495) 00:08:39.404 fused_ordering(496) 00:08:39.404 fused_ordering(497) 00:08:39.404 fused_ordering(498) 00:08:39.404 fused_ordering(499) 00:08:39.404 fused_ordering(500) 00:08:39.404 fused_ordering(501) 00:08:39.404 fused_ordering(502) 00:08:39.404 fused_ordering(503) 00:08:39.404 fused_ordering(504) 00:08:39.404 fused_ordering(505) 00:08:39.404 fused_ordering(506) 00:08:39.404 fused_ordering(507) 00:08:39.404 fused_ordering(508) 00:08:39.404 fused_ordering(509) 00:08:39.404 fused_ordering(510) 00:08:39.404 fused_ordering(511) 00:08:39.404 fused_ordering(512) 00:08:39.404 fused_ordering(513) 00:08:39.404 fused_ordering(514) 00:08:39.404 fused_ordering(515) 00:08:39.404 fused_ordering(516) 00:08:39.404 fused_ordering(517) 00:08:39.404 fused_ordering(518) 00:08:39.404 fused_ordering(519) 00:08:39.404 fused_ordering(520) 00:08:39.404 fused_ordering(521) 00:08:39.404 fused_ordering(522) 00:08:39.404 fused_ordering(523) 00:08:39.404 fused_ordering(524) 00:08:39.404 fused_ordering(525) 00:08:39.404 fused_ordering(526) 00:08:39.404 fused_ordering(527) 00:08:39.404 fused_ordering(528) 00:08:39.404 fused_ordering(529) 00:08:39.404 fused_ordering(530) 00:08:39.404 fused_ordering(531) 00:08:39.404 fused_ordering(532) 00:08:39.404 fused_ordering(533) 00:08:39.404 fused_ordering(534) 00:08:39.404 fused_ordering(535) 00:08:39.404 fused_ordering(536) 00:08:39.404 fused_ordering(537) 00:08:39.404 fused_ordering(538) 00:08:39.404 fused_ordering(539) 00:08:39.404 fused_ordering(540) 00:08:39.404 fused_ordering(541) 00:08:39.404 fused_ordering(542) 00:08:39.404 fused_ordering(543) 00:08:39.404 fused_ordering(544) 00:08:39.404 fused_ordering(545) 00:08:39.404 fused_ordering(546) 00:08:39.404 fused_ordering(547) 00:08:39.404 fused_ordering(548) 00:08:39.404 fused_ordering(549) 00:08:39.404 fused_ordering(550) 00:08:39.404 fused_ordering(551) 00:08:39.404 fused_ordering(552) 00:08:39.404 fused_ordering(553) 00:08:39.404 fused_ordering(554) 00:08:39.404 fused_ordering(555) 00:08:39.404 fused_ordering(556) 00:08:39.404 fused_ordering(557) 00:08:39.404 fused_ordering(558) 00:08:39.404 fused_ordering(559) 00:08:39.404 fused_ordering(560) 00:08:39.404 fused_ordering(561) 00:08:39.404 fused_ordering(562) 00:08:39.404 fused_ordering(563) 00:08:39.404 fused_ordering(564) 00:08:39.404 fused_ordering(565) 00:08:39.404 fused_ordering(566) 00:08:39.404 fused_ordering(567) 00:08:39.404 fused_ordering(568) 00:08:39.404 fused_ordering(569) 00:08:39.404 fused_ordering(570) 00:08:39.404 fused_ordering(571) 00:08:39.404 fused_ordering(572) 00:08:39.404 fused_ordering(573) 00:08:39.404 fused_ordering(574) 00:08:39.404 fused_ordering(575) 00:08:39.404 fused_ordering(576) 00:08:39.404 fused_ordering(577) 00:08:39.405 fused_ordering(578) 00:08:39.405 fused_ordering(579) 00:08:39.405 fused_ordering(580) 00:08:39.405 fused_ordering(581) 00:08:39.405 fused_ordering(582) 00:08:39.405 
fused_ordering(583) 00:08:39.405 fused_ordering(584) 00:08:39.405 fused_ordering(585) 00:08:39.405 fused_ordering(586) 00:08:39.405 fused_ordering(587) 00:08:39.405 fused_ordering(588) 00:08:39.405 fused_ordering(589) 00:08:39.405 fused_ordering(590) 00:08:39.405 fused_ordering(591) 00:08:39.405 fused_ordering(592) 00:08:39.405 fused_ordering(593) 00:08:39.405 fused_ordering(594) 00:08:39.405 fused_ordering(595) 00:08:39.405 fused_ordering(596) 00:08:39.405 fused_ordering(597) 00:08:39.405 fused_ordering(598) 00:08:39.405 fused_ordering(599) 00:08:39.405 fused_ordering(600) 00:08:39.405 fused_ordering(601) 00:08:39.405 fused_ordering(602) 00:08:39.405 fused_ordering(603) 00:08:39.405 fused_ordering(604) 00:08:39.405 fused_ordering(605) 00:08:39.405 fused_ordering(606) 00:08:39.405 fused_ordering(607) 00:08:39.405 fused_ordering(608) 00:08:39.405 fused_ordering(609) 00:08:39.405 fused_ordering(610) 00:08:39.405 fused_ordering(611) 00:08:39.405 fused_ordering(612) 00:08:39.405 fused_ordering(613) 00:08:39.405 fused_ordering(614) 00:08:39.405 fused_ordering(615) 00:08:39.969 fused_ordering(616) 00:08:39.969 fused_ordering(617) 00:08:39.969 fused_ordering(618) 00:08:39.969 fused_ordering(619) 00:08:39.969 fused_ordering(620) 00:08:39.969 fused_ordering(621) 00:08:39.969 fused_ordering(622) 00:08:39.969 fused_ordering(623) 00:08:39.969 fused_ordering(624) 00:08:39.969 fused_ordering(625) 00:08:39.969 fused_ordering(626) 00:08:39.969 fused_ordering(627) 00:08:39.969 fused_ordering(628) 00:08:39.969 fused_ordering(629) 00:08:39.969 fused_ordering(630) 00:08:39.969 fused_ordering(631) 00:08:39.969 fused_ordering(632) 00:08:39.969 fused_ordering(633) 00:08:39.969 fused_ordering(634) 00:08:39.969 fused_ordering(635) 00:08:39.969 fused_ordering(636) 00:08:39.969 fused_ordering(637) 00:08:39.969 fused_ordering(638) 00:08:39.969 fused_ordering(639) 00:08:39.969 fused_ordering(640) 00:08:39.969 fused_ordering(641) 00:08:39.969 fused_ordering(642) 00:08:39.969 fused_ordering(643) 00:08:39.969 fused_ordering(644) 00:08:39.969 fused_ordering(645) 00:08:39.969 fused_ordering(646) 00:08:39.969 fused_ordering(647) 00:08:39.969 fused_ordering(648) 00:08:39.969 fused_ordering(649) 00:08:39.969 fused_ordering(650) 00:08:39.969 fused_ordering(651) 00:08:39.969 fused_ordering(652) 00:08:39.969 fused_ordering(653) 00:08:39.969 fused_ordering(654) 00:08:39.969 fused_ordering(655) 00:08:39.969 fused_ordering(656) 00:08:39.969 fused_ordering(657) 00:08:39.969 fused_ordering(658) 00:08:39.969 fused_ordering(659) 00:08:39.969 fused_ordering(660) 00:08:39.969 fused_ordering(661) 00:08:39.969 fused_ordering(662) 00:08:39.969 fused_ordering(663) 00:08:39.969 fused_ordering(664) 00:08:39.969 fused_ordering(665) 00:08:39.969 fused_ordering(666) 00:08:39.969 fused_ordering(667) 00:08:39.969 fused_ordering(668) 00:08:39.969 fused_ordering(669) 00:08:39.969 fused_ordering(670) 00:08:39.969 fused_ordering(671) 00:08:39.969 fused_ordering(672) 00:08:39.969 fused_ordering(673) 00:08:39.969 fused_ordering(674) 00:08:39.969 fused_ordering(675) 00:08:39.969 fused_ordering(676) 00:08:39.969 fused_ordering(677) 00:08:39.969 fused_ordering(678) 00:08:39.969 fused_ordering(679) 00:08:39.969 fused_ordering(680) 00:08:39.969 fused_ordering(681) 00:08:39.969 fused_ordering(682) 00:08:39.969 fused_ordering(683) 00:08:39.969 fused_ordering(684) 00:08:39.969 fused_ordering(685) 00:08:39.969 fused_ordering(686) 00:08:39.969 fused_ordering(687) 00:08:39.969 fused_ordering(688) 00:08:39.969 fused_ordering(689) 00:08:39.969 fused_ordering(690) 
00:08:39.969 fused_ordering(691) 00:08:39.969 fused_ordering(692) 00:08:39.969 fused_ordering(693) 00:08:39.969 fused_ordering(694) 00:08:39.969 fused_ordering(695) 00:08:39.969 fused_ordering(696) 00:08:39.969 fused_ordering(697) 00:08:39.969 fused_ordering(698) 00:08:39.969 fused_ordering(699) 00:08:39.969 fused_ordering(700) 00:08:39.970 fused_ordering(701) 00:08:39.970 fused_ordering(702) 00:08:39.970 fused_ordering(703) 00:08:39.970 fused_ordering(704) 00:08:39.970 fused_ordering(705) 00:08:39.970 fused_ordering(706) 00:08:39.970 fused_ordering(707) 00:08:39.970 fused_ordering(708) 00:08:39.970 fused_ordering(709) 00:08:39.970 fused_ordering(710) 00:08:39.970 fused_ordering(711) 00:08:39.970 fused_ordering(712) 00:08:39.970 fused_ordering(713) 00:08:39.970 fused_ordering(714) 00:08:39.970 fused_ordering(715) 00:08:39.970 fused_ordering(716) 00:08:39.970 fused_ordering(717) 00:08:39.970 fused_ordering(718) 00:08:39.970 fused_ordering(719) 00:08:39.970 fused_ordering(720) 00:08:39.970 fused_ordering(721) 00:08:39.970 fused_ordering(722) 00:08:39.970 fused_ordering(723) 00:08:39.970 fused_ordering(724) 00:08:39.970 fused_ordering(725) 00:08:39.970 fused_ordering(726) 00:08:39.970 fused_ordering(727) 00:08:39.970 fused_ordering(728) 00:08:39.970 fused_ordering(729) 00:08:39.970 fused_ordering(730) 00:08:39.970 fused_ordering(731) 00:08:39.970 fused_ordering(732) 00:08:39.970 fused_ordering(733) 00:08:39.970 fused_ordering(734) 00:08:39.970 fused_ordering(735) 00:08:39.970 fused_ordering(736) 00:08:39.970 fused_ordering(737) 00:08:39.970 fused_ordering(738) 00:08:39.970 fused_ordering(739) 00:08:39.970 fused_ordering(740) 00:08:39.970 fused_ordering(741) 00:08:39.970 fused_ordering(742) 00:08:39.970 fused_ordering(743) 00:08:39.970 fused_ordering(744) 00:08:39.970 fused_ordering(745) 00:08:39.970 fused_ordering(746) 00:08:39.970 fused_ordering(747) 00:08:39.970 fused_ordering(748) 00:08:39.970 fused_ordering(749) 00:08:39.970 fused_ordering(750) 00:08:39.970 fused_ordering(751) 00:08:39.970 fused_ordering(752) 00:08:39.970 fused_ordering(753) 00:08:39.970 fused_ordering(754) 00:08:39.970 fused_ordering(755) 00:08:39.970 fused_ordering(756) 00:08:39.970 fused_ordering(757) 00:08:39.970 fused_ordering(758) 00:08:39.970 fused_ordering(759) 00:08:39.970 fused_ordering(760) 00:08:39.970 fused_ordering(761) 00:08:39.970 fused_ordering(762) 00:08:39.970 fused_ordering(763) 00:08:39.970 fused_ordering(764) 00:08:39.970 fused_ordering(765) 00:08:39.970 fused_ordering(766) 00:08:39.970 fused_ordering(767) 00:08:39.970 fused_ordering(768) 00:08:39.970 fused_ordering(769) 00:08:39.970 fused_ordering(770) 00:08:39.970 fused_ordering(771) 00:08:39.970 fused_ordering(772) 00:08:39.970 fused_ordering(773) 00:08:39.970 fused_ordering(774) 00:08:39.970 fused_ordering(775) 00:08:39.970 fused_ordering(776) 00:08:39.970 fused_ordering(777) 00:08:39.970 fused_ordering(778) 00:08:39.970 fused_ordering(779) 00:08:39.970 fused_ordering(780) 00:08:39.970 fused_ordering(781) 00:08:39.970 fused_ordering(782) 00:08:39.970 fused_ordering(783) 00:08:39.970 fused_ordering(784) 00:08:39.970 fused_ordering(785) 00:08:39.970 fused_ordering(786) 00:08:39.970 fused_ordering(787) 00:08:39.970 fused_ordering(788) 00:08:39.970 fused_ordering(789) 00:08:39.970 fused_ordering(790) 00:08:39.970 fused_ordering(791) 00:08:39.970 fused_ordering(792) 00:08:39.970 fused_ordering(793) 00:08:39.970 fused_ordering(794) 00:08:39.970 fused_ordering(795) 00:08:39.970 fused_ordering(796) 00:08:39.970 fused_ordering(797) 00:08:39.970 
fused_ordering(798) 00:08:39.970 fused_ordering(799) 00:08:39.970 fused_ordering(800) 00:08:39.970 fused_ordering(801) 00:08:39.970 fused_ordering(802) 00:08:39.970 fused_ordering(803) 00:08:39.970 fused_ordering(804) 00:08:39.970 fused_ordering(805) 00:08:39.970 fused_ordering(806) 00:08:39.970 fused_ordering(807) 00:08:39.970 fused_ordering(808) 00:08:39.970 fused_ordering(809) 00:08:39.970 fused_ordering(810) 00:08:39.970 fused_ordering(811) 00:08:39.970 fused_ordering(812) 00:08:39.970 fused_ordering(813) 00:08:39.970 fused_ordering(814) 00:08:39.970 fused_ordering(815) 00:08:39.970 fused_ordering(816) 00:08:39.970 fused_ordering(817) 00:08:39.970 fused_ordering(818) 00:08:39.970 fused_ordering(819) 00:08:39.970 fused_ordering(820) 00:08:40.904 fused_ordering(821) 00:08:40.904 fused_ordering(822) 00:08:40.904 fused_ordering(823) 00:08:40.904 fused_ordering(824) 00:08:40.904 fused_ordering(825) 00:08:40.904 fused_ordering(826) 00:08:40.904 fused_ordering(827) 00:08:40.904 fused_ordering(828) 00:08:40.904 fused_ordering(829) 00:08:40.904 fused_ordering(830) 00:08:40.904 fused_ordering(831) 00:08:40.904 fused_ordering(832) 00:08:40.904 fused_ordering(833) 00:08:40.904 fused_ordering(834) 00:08:40.904 fused_ordering(835) 00:08:40.904 fused_ordering(836) 00:08:40.904 fused_ordering(837) 00:08:40.904 fused_ordering(838) 00:08:40.904 fused_ordering(839) 00:08:40.904 fused_ordering(840) 00:08:40.904 fused_ordering(841) 00:08:40.904 fused_ordering(842) 00:08:40.904 fused_ordering(843) 00:08:40.904 fused_ordering(844) 00:08:40.904 fused_ordering(845) 00:08:40.904 fused_ordering(846) 00:08:40.904 fused_ordering(847) 00:08:40.904 fused_ordering(848) 00:08:40.904 fused_ordering(849) 00:08:40.904 fused_ordering(850) 00:08:40.904 fused_ordering(851) 00:08:40.904 fused_ordering(852) 00:08:40.904 fused_ordering(853) 00:08:40.904 fused_ordering(854) 00:08:40.904 fused_ordering(855) 00:08:40.904 fused_ordering(856) 00:08:40.904 fused_ordering(857) 00:08:40.904 fused_ordering(858) 00:08:40.904 fused_ordering(859) 00:08:40.904 fused_ordering(860) 00:08:40.904 fused_ordering(861) 00:08:40.904 fused_ordering(862) 00:08:40.904 fused_ordering(863) 00:08:40.904 fused_ordering(864) 00:08:40.904 fused_ordering(865) 00:08:40.904 fused_ordering(866) 00:08:40.904 fused_ordering(867) 00:08:40.904 fused_ordering(868) 00:08:40.904 fused_ordering(869) 00:08:40.904 fused_ordering(870) 00:08:40.904 fused_ordering(871) 00:08:40.904 fused_ordering(872) 00:08:40.904 fused_ordering(873) 00:08:40.904 fused_ordering(874) 00:08:40.904 fused_ordering(875) 00:08:40.904 fused_ordering(876) 00:08:40.904 fused_ordering(877) 00:08:40.904 fused_ordering(878) 00:08:40.904 fused_ordering(879) 00:08:40.904 fused_ordering(880) 00:08:40.904 fused_ordering(881) 00:08:40.904 fused_ordering(882) 00:08:40.904 fused_ordering(883) 00:08:40.904 fused_ordering(884) 00:08:40.904 fused_ordering(885) 00:08:40.904 fused_ordering(886) 00:08:40.904 fused_ordering(887) 00:08:40.904 fused_ordering(888) 00:08:40.904 fused_ordering(889) 00:08:40.904 fused_ordering(890) 00:08:40.904 fused_ordering(891) 00:08:40.904 fused_ordering(892) 00:08:40.904 fused_ordering(893) 00:08:40.904 fused_ordering(894) 00:08:40.904 fused_ordering(895) 00:08:40.904 fused_ordering(896) 00:08:40.904 fused_ordering(897) 00:08:40.904 fused_ordering(898) 00:08:40.904 fused_ordering(899) 00:08:40.904 fused_ordering(900) 00:08:40.904 fused_ordering(901) 00:08:40.904 fused_ordering(902) 00:08:40.904 fused_ordering(903) 00:08:40.904 fused_ordering(904) 00:08:40.904 fused_ordering(905) 
00:08:40.904 fused_ordering(906) 00:08:40.904 fused_ordering(907) 00:08:40.904 fused_ordering(908) 00:08:40.904 fused_ordering(909) 00:08:40.904 fused_ordering(910) 00:08:40.904 fused_ordering(911) 00:08:40.904 fused_ordering(912) 00:08:40.904 fused_ordering(913) 00:08:40.904 fused_ordering(914) 00:08:40.904 fused_ordering(915) 00:08:40.904 fused_ordering(916) 00:08:40.904 fused_ordering(917) 00:08:40.904 fused_ordering(918) 00:08:40.904 fused_ordering(919) 00:08:40.904 fused_ordering(920) 00:08:40.905 fused_ordering(921) 00:08:40.905 fused_ordering(922) 00:08:40.905 fused_ordering(923) 00:08:40.905 fused_ordering(924) 00:08:40.905 fused_ordering(925) 00:08:40.905 fused_ordering(926) 00:08:40.905 fused_ordering(927) 00:08:40.905 fused_ordering(928) 00:08:40.905 fused_ordering(929) 00:08:40.905 fused_ordering(930) 00:08:40.905 fused_ordering(931) 00:08:40.905 fused_ordering(932) 00:08:40.905 fused_ordering(933) 00:08:40.905 fused_ordering(934) 00:08:40.905 fused_ordering(935) 00:08:40.905 fused_ordering(936) 00:08:40.905 fused_ordering(937) 00:08:40.905 fused_ordering(938) 00:08:40.905 fused_ordering(939) 00:08:40.905 fused_ordering(940) 00:08:40.905 fused_ordering(941) 00:08:40.905 fused_ordering(942) 00:08:40.905 fused_ordering(943) 00:08:40.905 fused_ordering(944) 00:08:40.905 fused_ordering(945) 00:08:40.905 fused_ordering(946) 00:08:40.905 fused_ordering(947) 00:08:40.905 fused_ordering(948) 00:08:40.905 fused_ordering(949) 00:08:40.905 fused_ordering(950) 00:08:40.905 fused_ordering(951) 00:08:40.905 fused_ordering(952) 00:08:40.905 fused_ordering(953) 00:08:40.905 fused_ordering(954) 00:08:40.905 fused_ordering(955) 00:08:40.905 fused_ordering(956) 00:08:40.905 fused_ordering(957) 00:08:40.905 fused_ordering(958) 00:08:40.905 fused_ordering(959) 00:08:40.905 fused_ordering(960) 00:08:40.905 fused_ordering(961) 00:08:40.905 fused_ordering(962) 00:08:40.905 fused_ordering(963) 00:08:40.905 fused_ordering(964) 00:08:40.905 fused_ordering(965) 00:08:40.905 fused_ordering(966) 00:08:40.905 fused_ordering(967) 00:08:40.905 fused_ordering(968) 00:08:40.905 fused_ordering(969) 00:08:40.905 fused_ordering(970) 00:08:40.905 fused_ordering(971) 00:08:40.905 fused_ordering(972) 00:08:40.905 fused_ordering(973) 00:08:40.905 fused_ordering(974) 00:08:40.905 fused_ordering(975) 00:08:40.905 fused_ordering(976) 00:08:40.905 fused_ordering(977) 00:08:40.905 fused_ordering(978) 00:08:40.905 fused_ordering(979) 00:08:40.905 fused_ordering(980) 00:08:40.905 fused_ordering(981) 00:08:40.905 fused_ordering(982) 00:08:40.905 fused_ordering(983) 00:08:40.905 fused_ordering(984) 00:08:40.905 fused_ordering(985) 00:08:40.905 fused_ordering(986) 00:08:40.905 fused_ordering(987) 00:08:40.905 fused_ordering(988) 00:08:40.905 fused_ordering(989) 00:08:40.905 fused_ordering(990) 00:08:40.905 fused_ordering(991) 00:08:40.905 fused_ordering(992) 00:08:40.905 fused_ordering(993) 00:08:40.905 fused_ordering(994) 00:08:40.905 fused_ordering(995) 00:08:40.905 fused_ordering(996) 00:08:40.905 fused_ordering(997) 00:08:40.905 fused_ordering(998) 00:08:40.905 fused_ordering(999) 00:08:40.905 fused_ordering(1000) 00:08:40.905 fused_ordering(1001) 00:08:40.905 fused_ordering(1002) 00:08:40.905 fused_ordering(1003) 00:08:40.905 fused_ordering(1004) 00:08:40.905 fused_ordering(1005) 00:08:40.905 fused_ordering(1006) 00:08:40.905 fused_ordering(1007) 00:08:40.905 fused_ordering(1008) 00:08:40.905 fused_ordering(1009) 00:08:40.905 fused_ordering(1010) 00:08:40.905 fused_ordering(1011) 00:08:40.905 fused_ordering(1012) 
00:08:40.905 fused_ordering(1013) 00:08:40.905 fused_ordering(1014) 00:08:40.905 fused_ordering(1015) 00:08:40.905 fused_ordering(1016) 00:08:40.905 fused_ordering(1017) 00:08:40.905 fused_ordering(1018) 00:08:40.905 fused_ordering(1019) 00:08:40.905 fused_ordering(1020) 00:08:40.905 fused_ordering(1021) 00:08:40.905 fused_ordering(1022) 00:08:40.905 fused_ordering(1023) 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.905 rmmod nvme_tcp 00:08:40.905 rmmod nvme_fabrics 00:08:40.905 rmmod nvme_keyring 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2164308 ']' 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2164308 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2164308 ']' 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2164308 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2164308 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2164308' 00:08:40.905 killing process with pid 2164308 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2164308 00:08:40.905 17:31:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2164308 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.163 17:31:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.704 17:31:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.704 00:08:43.704 real 0m8.420s 00:08:43.704 user 0m6.096s 00:08:43.704 sys 0m3.937s 00:08:43.704 17:31:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.704 17:31:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 ************************************ 00:08:43.704 END TEST nvmf_fused_ordering 00:08:43.704 ************************************ 00:08:43.704 17:31:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:43.704 17:31:38 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:43.704 17:31:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.704 17:31:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.704 17:31:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.704 ************************************ 00:08:43.704 START TEST nvmf_delete_subsystem 00:08:43.704 ************************************ 00:08:43.704 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:43.704 * Looking for test storage... 00:08:43.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.704 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.704 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:43.704 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.705 17:31:38 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.705 17:31:38 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.705 17:31:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.605 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.605 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.605 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.605 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.605 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.605 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.605 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.606 17:31:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.606 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.606 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.606 17:31:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.606 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.606 17:31:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:08:45.606 00:08:45.606 --- 10.0.0.2 ping statistics --- 00:08:45.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.606 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:45.606 00:08:45.606 --- 10.0.0.1 ping statistics --- 00:08:45.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.606 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2166673 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2166673 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2166673 ']' 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.606 17:31:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:45.606 [2024-07-15 17:31:40.553256] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:08:45.606 [2024-07-15 17:31:40.553337] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.606 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.606 [2024-07-15 17:31:40.623522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.607 [2024-07-15 17:31:40.738691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:45.607 [2024-07-15 17:31:40.738750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.607 [2024-07-15 17:31:40.738777] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.607 [2024-07-15 17:31:40.738790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.607 [2024-07-15 17:31:40.738801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.607 [2024-07-15 17:31:40.738909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.607 [2024-07-15 17:31:40.738917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.538 [2024-07-15 17:31:41.531442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.538 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.539 [2024-07-15 17:31:41.547617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.539 NULL1 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.539 Delay0 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2166815 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:46.539 17:31:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:46.539 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.539 [2024-07-15 17:31:41.632404] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
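(The trace above has just launched spdk_nvme_perf against the target; the lines that follow delete the live subsystem while that workload is still in flight, so the burst of completions with errors (sct=0, sc=8) is the expected outcome. A minimal sketch of that delete-under-I/O pattern, assuming a running nvmf_tgt listening on 10.0.0.2:4420 and with the SPDK path, NQN and timings as illustrative placeholders only, could look like:)

    #!/usr/bin/env bash
    # Sketch only: delete an NVMe-oF subsystem while a perf workload is still running.
    # SPDK path, subsystem NQN and address are illustrative; adjust to the local build.
    SPDK=/path/to/spdk
    NQN=nqn.2016-06.io.spdk:cnode1

    # Start a short random read/write workload against the TCP target in the background.
    "$SPDK/build/bin/spdk_nvme_perf" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 &
    perf_pid=$!

    sleep 2                                             # let the workload ramp up
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem "$NQN" # remove the subsystem under I/O

    # Outstanding requests are expected to complete with errors and perf should exit on its own.
    wait "$perf_pid" || echo "perf exited with errors, as expected"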
00:08:49.058 17:31:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.058 17:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.058 17:31:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 starting I/O failed: -6 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Write completed with error (sct=0, sc=8) 00:08:49.058 Write completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 starting I/O failed: -6 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 starting I/O failed: -6 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Write completed with error (sct=0, sc=8) 00:08:49.058 starting I/O failed: -6 00:08:49.058 Write completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Read completed with error (sct=0, sc=8) 00:08:49.058 Write completed with error (sct=0, sc=8) 00:08:49.058 starting I/O failed: -6 00:08:49.058 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 [2024-07-15 17:31:43.844330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb285c0 is same with the state(5) to be set 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 
00:08:49.059 starting I/O failed: -6 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read 
completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 starting I/O failed: -6 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 [2024-07-15 17:31:43.845072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f71c0000c00 is same with the state(5) to be set 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with 
error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Write completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.059 Read completed with error (sct=0, sc=8) 00:08:49.993 [2024-07-15 17:31:44.811817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29ac0 is same with the state(5) to be set 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 [2024-07-15 17:31:44.844187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f71c000cfe0 is same with the state(5) to be set 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 [2024-07-15 17:31:44.844443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f71c000d740 is same with the state(5) to be set 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with 
error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 [2024-07-15 17:31:44.848219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb283e0 is same with the state(5) to be set 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 Write completed with error (sct=0, sc=8) 00:08:49.993 Read completed with error (sct=0, sc=8) 00:08:49.993 [2024-07-15 17:31:44.848778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb287a0 is same with the state(5) to be set 00:08:49.993 Initializing NVMe Controllers 00:08:49.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.993 Controller IO queue size 128, less than required. 00:08:49.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:49.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:49.993 Initialization complete. Launching workers. 
00:08:49.993 ======================================================== 00:08:49.993 Latency(us) 00:08:49.993 Device Information : IOPS MiB/s Average min max 00:08:49.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.33 0.08 927350.47 577.18 1012567.33 00:08:49.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.33 0.08 923898.14 379.25 1012325.94 00:08:49.993 ======================================================== 00:08:49.993 Total : 314.66 0.15 925624.30 379.25 1012567.33 00:08:49.993 00:08:49.993 [2024-07-15 17:31:44.849195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29ac0 (9): Bad file descriptor 00:08:49.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:49.993 17:31:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.993 17:31:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:49.993 17:31:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2166815 00:08:49.993 17:31:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2166815 00:08:50.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2166815) - No such process 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2166815 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2166815 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2166815 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.251 [2024-07-15 17:31:45.372279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2167338 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:50.251 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.509 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.509 [2024-07-15 17:31:45.435249] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
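(The lines below poll the second perf process with kill -0, sleeping 0.5 s between checks and giving up after roughly 20 iterations once the subsystem has been deleted. One way to express that wait-for-exit idiom, with the pid variable and limits as illustrative placeholders, is sketched here:)

    # Sketch of the kill -0 polling loop used to wait for a background process to exit.
    # "$perf_pid" is assumed to hold the workload's pid; 20 iterations at 0.5 s each
    # gives it about 10 seconds to finish before the loop gives up.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "process $perf_pid did not exit in time" >&2
            break
        fi
        sleep 0.5
    done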
00:08:50.766 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.766 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:50.766 17:31:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.330 17:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.330 17:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:51.330 17:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.925 17:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.925 17:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:51.925 17:31:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:52.490 17:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:52.490 17:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:52.490 17:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.054 17:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.054 17:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:53.054 17:31:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.312 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.312 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:53.312 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.878 Initializing NVMe Controllers 00:08:53.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:53.878 Controller IO queue size 128, less than required. 00:08:53.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:53.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:53.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:53.878 Initialization complete. Launching workers. 
00:08:53.878 ======================================================== 00:08:53.878 Latency(us) 00:08:53.878 Device Information : IOPS MiB/s Average min max 00:08:53.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004698.19 1000228.19 1041563.17 00:08:53.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005442.65 1000260.39 1043937.23 00:08:53.878 ======================================================== 00:08:53.878 Total : 256.00 0.12 1005070.42 1000228.19 1043937.23 00:08:53.878 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2167338 00:08:53.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2167338) - No such process 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2167338 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.878 rmmod nvme_tcp 00:08:53.878 rmmod nvme_fabrics 00:08:53.878 rmmod nvme_keyring 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2166673 ']' 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2166673 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2166673 ']' 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2166673 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2166673 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2166673' 00:08:53.878 killing process with pid 2166673 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2166673 00:08:53.878 17:31:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2166673 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.137 17:31:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.665 17:31:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.665 00:08:56.665 real 0m12.963s 00:08:56.665 user 0m29.563s 00:08:56.665 sys 0m3.046s 00:08:56.665 17:31:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.665 17:31:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.665 ************************************ 00:08:56.665 END TEST nvmf_delete_subsystem 00:08:56.665 ************************************ 00:08:56.665 17:31:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:56.665 17:31:51 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:56.665 17:31:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:56.665 17:31:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.665 17:31:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.665 ************************************ 00:08:56.665 START TEST nvmf_ns_masking 00:08:56.665 ************************************ 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:56.666 * Looking for test storage... 
00:08:56.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f71483dd-87a8-4290-81a7-6bed8730af51 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f4de9ab6-750c-45de-bfb9-ccffdcf6ff8a 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0652d81a-7bbb-45ae-9c98-84630fe8f082 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.666 17:31:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:58.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:58.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.567 
17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:58.567 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:58.567 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:58.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:08:58.567 00:08:58.567 --- 10.0.0.2 ping statistics --- 00:08:58.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.567 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:08:58.567 00:08:58.567 --- 10.0.0.1 ping statistics --- 00:08:58.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.567 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2169686 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2169686 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2169686 ']' 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.567 17:31:53 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.567 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:58.568 [2024-07-15 17:31:53.653710] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:08:58.568 [2024-07-15 17:31:53.653807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.568 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.825 [2024-07-15 17:31:53.728322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.825 [2024-07-15 17:31:53.847144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.825 [2024-07-15 17:31:53.847219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.825 [2024-07-15 17:31:53.847236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.825 [2024-07-15 17:31:53.847250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.825 [2024-07-15 17:31:53.847262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.825 [2024-07-15 17:31:53.847292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.083 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.083 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:59.083 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.083 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.083 17:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:59.083 17:31:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.083 17:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.341 [2024-07-15 17:31:54.230459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.341 17:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:08:59.341 17:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:08:59.341 17:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:59.600 Malloc1 00:08:59.600 17:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:59.858 Malloc2 00:08:59.858 17:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
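For reference, the target-side bring-up traced so far condenses to the following RPC sequence (a minimal sketch assembled only from the commands visible in the trace above; the transport options and the 64 MiB / 512 B malloc geometry are the values the test logged, not general recommendations):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                                        # create the TCP transport with the options the test passes
  rpc.py bdev_malloc_create 64 512 -b Malloc1                                           # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME    # -a: allow any host NQN to connect, -s: serial number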
00:09:00.116 17:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:00.374 17:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.374 [2024-07-15 17:31:55.504910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.632 17:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:00.632 17:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0652d81a-7bbb-45ae-9c98-84630fe8f082 -a 10.0.0.2 -s 4420 -i 4 00:09:00.632 17:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.632 17:31:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:00.632 17:31:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.632 17:31:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:00.632 17:31:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:03.157 [ 0]:0x1 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cfd243ee2ef24df7a6a5e08c892daec5 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cfd243ee2ef24df7a6a5e08c892daec5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:03.157 17:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
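On the host side, the ns_is_visible helper that the trace keeps invoking reduces to two nvme-cli probes against whichever controller list-subsys mapped to cnode1 (nvme0 in this run); a rough sketch with the NSID fixed to 0x1 for illustration:

  nvme list-ns /dev/nvme0 | grep 0x1                      # the namespace must appear in the active namespace list
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid     # and report a real NGUID
  # an all-zero NGUID (00000000000000000000000000000000) is what a masked namespace reads back as,
  # so the script compares the value against a run of 32 zeros to decide visibility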
00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:03.157 [ 0]:0x1 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cfd243ee2ef24df7a6a5e08c892daec5 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cfd243ee2ef24df7a6a5e08c892daec5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:03.157 [ 1]:0x2 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c36475a7cdca437f8edfbe1cb2ec950a 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c36475a7cdca437f8edfbe1cb2ec950a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:03.157 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.415 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.415 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0652d81a-7bbb-45ae-9c98-84630fe8f082 -a 10.0.0.2 -s 4420 -i 4 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:03.986 17:31:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.887 17:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.887 17:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.887 17:32:00 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.887 17:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.887 17:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.887 17:32:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:05.887 17:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:05.887 17:32:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:06.145 [ 0]:0x2 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:06.145 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.146 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c36475a7cdca437f8edfbe1cb2ec950a 00:09:06.146 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c36475a7cdca437f8edfbe1cb2ec950a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.146 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:06.403 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:06.403 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.403 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:06.403 [ 0]:0x1 00:09:06.403 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:06.403 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cfd243ee2ef24df7a6a5e08c892daec5 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cfd243ee2ef24df7a6a5e08c892daec5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:06.660 [ 1]:0x2 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c36475a7cdca437f8edfbe1cb2ec950a 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c36475a7cdca437f8edfbe1cb2ec950a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.660 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:06.918 [ 0]:0x2 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:06.918 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:06.919 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c36475a7cdca437f8edfbe1cb2ec950a 00:09:06.919 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c36475a7cdca437f8edfbe1cb2ec950a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:06.919 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:06.919 17:32:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.919 17:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:07.176 17:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:07.176 17:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0652d81a-7bbb-45ae-9c98-84630fe8f082 -a 10.0.0.2 -s 4420 -i 4 00:09:07.434 17:32:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:07.434 17:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:07.434 17:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.434 17:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:07.434 17:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:07.434 17:32:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
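The visibility toggling exercised above is driven entirely from the target with three RPCs; reduced to its essentials (subsystem, bdev, and host NQNs are the ones this test uses):

  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # namespace 1 starts hidden from every host
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # expose NSID 1 to host1 only
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # hide it again; the host-side NGUID check then reads back all zeros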
00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:09.335 [ 0]:0x1 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:09.335 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cfd243ee2ef24df7a6a5e08c892daec5 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cfd243ee2ef24df7a6a5e08c892daec5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:09.593 [ 1]:0x2 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c36475a7cdca437f8edfbe1cb2ec950a 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c36475a7cdca437f8edfbe1cb2ec950a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.593 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:09.852 [ 0]:0x2 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c36475a7cdca437f8edfbe1cb2ec950a 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c36475a7cdca437f8edfbe1cb2ec950a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:09.852 17:32:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:10.110 [2024-07-15 17:32:05.182262] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:10.110 request: 00:09:10.110 { 00:09:10.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.110 "nsid": 2, 00:09:10.110 "host": "nqn.2016-06.io.spdk:host1", 00:09:10.110 "method": "nvmf_ns_remove_host", 00:09:10.110 "req_id": 1 00:09:10.110 } 00:09:10.110 Got JSON-RPC error response 00:09:10.110 response: 00:09:10.110 { 00:09:10.110 "code": -32602, 00:09:10.110 "message": "Invalid parameters" 00:09:10.110 } 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:10.110 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:10.369 [ 0]:0x2 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c36475a7cdca437f8edfbe1cb2ec950a 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c36475a7cdca437f8edfbe1cb2ec950a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2171184 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2171184 /var/tmp/host.sock 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2171184 ']' 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.369 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:10.369 [2024-07-15 17:32:05.385974] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:09:10.369 [2024-07-15 17:32:05.386072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171184 ] 00:09:10.369 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.369 [2024-07-15 17:32:05.452709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.627 [2024-07-15 17:32:05.572332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.885 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.886 17:32:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:10.886 17:32:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.144 17:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:11.402 17:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f71483dd-87a8-4290-81a7-6bed8730af51 00:09:11.402 17:32:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:11.402 17:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F71483DD87A8429081A76BED8730AF51 -i 00:09:11.660 17:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f4de9ab6-750c-45de-bfb9-ccffdcf6ff8a 00:09:11.660 17:32:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:11.660 17:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F4DE9AB6750C45DEBFB9CCFFDCF6FF8A -i 00:09:11.918 17:32:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:12.176 17:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:12.434 17:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:12.434 17:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:12.999 nvme0n1 00:09:12.999 17:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:12.999 17:32:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:13.257 nvme1n2 00:09:13.257 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:13.257 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:13.257 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:13.257 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:13.257 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:13.514 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:13.514 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:13.514 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:13.514 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:13.772 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f71483dd-87a8-4290-81a7-6bed8730af51 == \f\7\1\4\8\3\d\d\-\8\7\a\8\-\4\2\9\0\-\8\1\a\7\-\6\b\e\d\8\7\3\0\a\f\5\1 ]] 00:09:13.772 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:13.772 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:13.772 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f4de9ab6-750c-45de-bfb9-ccffdcf6ff8a == \f\4\d\e\9\a\b\6\-\7\5\0\c\-\4\5\d\e\-\b\f\b\9\-\c\c\f\f\d\c\f\6\f\f\8\a ]] 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2171184 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2171184 ']' 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2171184 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2171184 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2171184' 00:09:14.030 killing process with pid 2171184 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2171184 00:09:14.030 17:32:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2171184 00:09:14.596 17:32:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:14.854 17:32:09 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.854 rmmod nvme_tcp 00:09:14.854 rmmod nvme_fabrics 00:09:14.854 rmmod nvme_keyring 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2169686 ']' 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2169686 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2169686 ']' 00:09:14.854 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2169686 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2169686 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2169686' 00:09:14.855 killing process with pid 2169686 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2169686 00:09:14.855 17:32:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2169686 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.113 17:32:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.649 17:32:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.649 00:09:17.649 real 0m20.887s 00:09:17.649 user 0m27.344s 00:09:17.649 sys 0m4.128s 00:09:17.649 17:32:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.649 17:32:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:17.649 ************************************ 00:09:17.649 END TEST nvmf_ns_masking 00:09:17.649 ************************************ 00:09:17.649 17:32:12 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:17.649 17:32:12 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:17.649 17:32:12 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:17.649 17:32:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:17.649 17:32:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.649 17:32:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:17.649 ************************************ 00:09:17.649 START TEST nvmf_nvme_cli 00:09:17.649 ************************************ 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:17.649 * Looking for test storage... 00:09:17.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:17.649 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.650 17:32:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:19.553 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:19.553 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.553 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:19.554 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:19.554 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.554 17:32:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:19.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:09:19.554 00:09:19.554 --- 10.0.0.2 ping statistics --- 00:09:19.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.554 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:09:19.554 00:09:19.554 --- 10.0.0.1 ping statistics --- 00:09:19.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.554 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2173748 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2173748 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2173748 ']' 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.554 17:32:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:19.554 [2024-07-15 17:32:14.505063] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
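[annotation] The nvmf_tcp_init phase traced above condenses to roughly the following shell sequence (a sketch assembled from the xtrace lines, not an exact replay; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this E810 node):

# target port moves into its own network namespace, initiator port stays in the root namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns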
00:09:19.554 [2024-07-15 17:32:14.505148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.554 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.554 [2024-07-15 17:32:14.572790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.813 [2024-07-15 17:32:14.694883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.813 [2024-07-15 17:32:14.694935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.813 [2024-07-15 17:32:14.694951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.813 [2024-07-15 17:32:14.694965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.813 [2024-07-15 17:32:14.694976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.813 [2024-07-15 17:32:14.695032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.813 [2024-07-15 17:32:14.695093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.813 [2024-07-15 17:32:14.695096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.813 [2024-07-15 17:32:14.695067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.379 [2024-07-15 17:32:15.503027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.379 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.637 Malloc0 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.637 Malloc1 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.637 17:32:15 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.637 [2024-07-15 17:32:15.585521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:20.637 00:09:20.637 Discovery Log Number of Records 2, Generation counter 2 00:09:20.637 =====Discovery Log Entry 0====== 00:09:20.637 trtype: tcp 00:09:20.637 adrfam: ipv4 00:09:20.637 subtype: current discovery subsystem 00:09:20.637 treq: not required 00:09:20.637 portid: 0 00:09:20.637 trsvcid: 4420 00:09:20.637 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:20.637 traddr: 10.0.0.2 00:09:20.637 eflags: explicit discovery connections, duplicate discovery information 00:09:20.637 sectype: none 00:09:20.637 =====Discovery Log Entry 1====== 00:09:20.637 trtype: tcp 00:09:20.637 adrfam: ipv4 00:09:20.637 subtype: nvme subsystem 00:09:20.637 treq: not required 00:09:20.637 portid: 0 00:09:20.637 trsvcid: 4420 00:09:20.637 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:20.637 traddr: 10.0.0.2 00:09:20.637 eflags: none 00:09:20.637 sectype: none 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- 
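[annotation] The target-side configuration driven by nvme_cli.sh above amounts to the RPC sequence below (sketch only; rpc_cmd is a thin wrapper over scripts/rpc.py talking to the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace, and the absolute workspace paths are shortened here):

# start the target in the namespace, then configure it over JSON-RPC
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512 B blocks
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The discovery log printed above (one discovery entry plus one entry for cnode1, both on 10.0.0.2:4420) is the direct result of those two add_listener calls.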
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:20.637 17:32:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.205 17:32:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:21.205 17:32:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.205 17:32:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.205 17:32:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:21.205 17:32:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:21.205 17:32:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:23.739 17:32:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:23.739 /dev/nvme0n1 ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:23.739 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- 
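[annotation] On the initiator side (root namespace) the test exercises nvme-cli against that target; condensed from the trace, with the polling loop below standing in for the waitforserial/get_nvme_devs helpers:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# wait until both Malloc namespaces are visible with the target's serial number
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 2 ]; do sleep 2; done
nvme list                                    # enumerates /dev/nvme0n1 and /dev/nvme0n2

nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The remainder of the test (below) is teardown: nvmf_delete_subsystem over RPC, unloading nvme-tcp/nvme-fabrics, killing the nvmf_tgt process and flushing the namespace addressing.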
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.004 rmmod nvme_tcp 00:09:24.004 rmmod nvme_fabrics 00:09:24.004 rmmod nvme_keyring 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2173748 ']' 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2173748 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2173748 ']' 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2173748 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2173748 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2173748' 00:09:24.004 killing process with pid 2173748 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2173748 00:09:24.004 17:32:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2173748 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.262 17:32:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.820 17:32:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:26.820 00:09:26.820 real 0m9.092s 00:09:26.820 user 0m18.932s 00:09:26.820 sys 0m2.230s 00:09:26.820 17:32:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.820 17:32:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:26.820 ************************************ 00:09:26.820 END TEST nvmf_nvme_cli 00:09:26.820 ************************************ 00:09:26.820 17:32:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:26.820 17:32:21 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:26.820 17:32:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:26.820 17:32:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:26.820 17:32:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.820 17:32:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:26.820 ************************************ 00:09:26.820 START TEST nvmf_vfio_user 00:09:26.820 ************************************ 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:26.820 * Looking for test storage... 00:09:26.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:26.820 
17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2174729 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2174729' 00:09:26.820 Process pid: 2174729 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2174729 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2174729 ']' 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:26.820 [2024-07-15 17:32:21.542331] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:09:26.820 [2024-07-15 17:32:21.542410] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.820 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.820 [2024-07-15 17:32:21.600674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.820 [2024-07-15 17:32:21.708245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.820 [2024-07-15 17:32:21.708306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.820 [2024-07-15 17:32:21.708322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.820 [2024-07-15 17:32:21.708336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.820 [2024-07-15 17:32:21.708348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:26.820 [2024-07-15 17:32:21.708433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.820 [2024-07-15 17:32:21.708468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.820 [2024-07-15 17:32:21.708590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.820 [2024-07-15 17:32:21.708592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.820 17:32:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:26.821 17:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:27.756 17:32:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:28.015 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:28.015 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:28.015 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:28.015 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:28.015 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:28.582 Malloc1 00:09:28.582 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:28.582 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:28.840 17:32:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:29.098 17:32:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:29.098 17:32:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:29.098 17:32:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:29.356 Malloc2 00:09:29.356 17:32:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:29.614 17:32:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:29.872 17:32:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:30.131 17:32:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:30.131 17:32:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:30.131 17:32:25 
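[annotation] The vfio-user target bring-up traced above reduces to the following per-device RPC sketch (device 1 shown; device 2 is identical with Malloc2, cnode2 and /var/run/vfio-user/domain/vfio-user2/2; workspace paths shortened):

./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
# for VFIOUSER the listener address is the directory that will hold the emulated controller's files
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The identify step traced next attaches to that directory as if it were a PCIe function:
spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
and the debug output that follows is the standard controller initialization handshake (map BARs, read CAP/VS, write ASQ/ACQ/AQA, set CC.EN and poll CSTS.RDY, then Identify) carried over the vfio-user socket instead of real PCIe.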
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:30.131 17:32:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:30.131 17:32:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:30.131 17:32:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:30.131 [2024-07-15 17:32:25.243874] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:09:30.131 [2024-07-15 17:32:25.243944] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175155 ] 00:09:30.131 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.394 [2024-07-15 17:32:25.277567] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:30.394 [2024-07-15 17:32:25.285315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:30.394 [2024-07-15 17:32:25.285344] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffb2f29d000 00:09:30.394 [2024-07-15 17:32:25.286310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.287306] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.288919] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.289316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.290322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.291330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.292333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.293339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:30.394 [2024-07-15 17:32:25.294343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:30.394 [2024-07-15 17:32:25.294363] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffb2f292000 00:09:30.394 [2024-07-15 17:32:25.295481] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:30.394 [2024-07-15 17:32:25.311526] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:30.394 [2024-07-15 17:32:25.311559] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:30.394 [2024-07-15 17:32:25.316462] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:30.394 [2024-07-15 17:32:25.316518] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:30.394 [2024-07-15 17:32:25.316614] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:30.394 [2024-07-15 17:32:25.316649] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:30.394 [2024-07-15 17:32:25.316666] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:30.394 [2024-07-15 17:32:25.317457] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:30.394 [2024-07-15 17:32:25.317478] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:30.394 [2024-07-15 17:32:25.317490] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:30.394 [2024-07-15 17:32:25.318461] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:30.394 [2024-07-15 17:32:25.318480] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:30.394 [2024-07-15 17:32:25.318494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:30.394 [2024-07-15 17:32:25.319465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:30.394 [2024-07-15 17:32:25.319484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:30.394 [2024-07-15 17:32:25.320469] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:30.394 [2024-07-15 17:32:25.320489] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:30.394 [2024-07-15 17:32:25.320498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:30.394 [2024-07-15 17:32:25.320509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:30.394 [2024-07-15 17:32:25.320618] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:30.394 [2024-07-15 17:32:25.320626] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:30.395 [2024-07-15 17:32:25.320635] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:30.395 [2024-07-15 17:32:25.321475] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:30.395 [2024-07-15 17:32:25.322484] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:30.395 [2024-07-15 17:32:25.323494] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:30.395 [2024-07-15 17:32:25.324489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:30.395 [2024-07-15 17:32:25.324591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:30.395 [2024-07-15 17:32:25.325508] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:30.395 [2024-07-15 17:32:25.325526] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:30.395 [2024-07-15 17:32:25.325535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325559] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:30.395 [2024-07-15 17:32:25.325576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325607] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:30.395 [2024-07-15 17:32:25.325618] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.395 [2024-07-15 17:32:25.325639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.325690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.325710] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:30.395 [2024-07-15 17:32:25.325722] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:30.395 [2024-07-15 17:32:25.325730] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:30.395 [2024-07-15 17:32:25.325738] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:30.395 [2024-07-15 17:32:25.325745] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:30.395 [2024-07-15 17:32:25.325753] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:30.395 [2024-07-15 17:32:25.325760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.325803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.325826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.395 [2024-07-15 17:32:25.325840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.395 [2024-07-15 17:32:25.325851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.395 [2024-07-15 17:32:25.325885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:30.395 [2024-07-15 17:32:25.325895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.325943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.325955] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:30.395 [2024-07-15 17:32:25.325964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.325991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.326017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.326084] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326115] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:30.395 [2024-07-15 17:32:25.326124] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:30.395 [2024-07-15 17:32:25.326133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.326148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.326190] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:30.395 [2024-07-15 17:32:25.326208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326250] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:30.395 [2024-07-15 17:32:25.326258] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.395 [2024-07-15 17:32:25.326267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.326292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.326317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326344] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:30.395 [2024-07-15 17:32:25.326352] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.395 [2024-07-15 17:32:25.326361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.326375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.326390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
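The records above trace the host-side initialization of the vfio-user controller step by step: CC.EN is written to 1, CSTS.RDY is polled until the controller reports ready, the Identify Controller data is fetched, AER and Keep Alive are configured, the queue count is negotiated, and the active namespaces are identified; the trace continues below through the supported log pages and features and the final transition to the ready state. A minimal sketch of how the same debug trace can be requested against a target that is already serving this vfio-user socket, reusing only the identify tool and log flags that appear later in this run:

# Sketch only: reproduce the controller init/identify trace captured here.
# -L nvme -L nvme_vfio -L vfio_pci enable the debug components whose messages appear above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_nvme_identify" \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -g -L nvme -L nvme_vfio -L vfio_pci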
00:09:30.395 [2024-07-15 17:32:25.326415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326455] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:30.395 [2024-07-15 17:32:25.326462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:30.395 [2024-07-15 17:32:25.326471] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:30.395 [2024-07-15 17:32:25.326500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:30.395 [2024-07-15 17:32:25.326519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:30.395 [2024-07-15 17:32:25.326538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:30.396 [2024-07-15 17:32:25.326550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:30.396 [2024-07-15 17:32:25.326565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:30.396 [2024-07-15 17:32:25.326577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:30.396 [2024-07-15 17:32:25.326592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:30.396 [2024-07-15 17:32:25.326604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:30.396 [2024-07-15 17:32:25.326627] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:30.396 [2024-07-15 17:32:25.326637] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:30.396 [2024-07-15 17:32:25.326643] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:30.396 [2024-07-15 17:32:25.326649] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:30.396 [2024-07-15 17:32:25.326658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:30.396 [2024-07-15 17:32:25.326670] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:30.396 
[2024-07-15 17:32:25.326678] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:30.396 [2024-07-15 17:32:25.326686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:30.396 [2024-07-15 17:32:25.326697] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:30.396 [2024-07-15 17:32:25.326705] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:30.396 [2024-07-15 17:32:25.326713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:30.396 [2024-07-15 17:32:25.326725] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:30.396 [2024-07-15 17:32:25.326736] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:30.396 [2024-07-15 17:32:25.326745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:30.396 [2024-07-15 17:32:25.326757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:30.396 [2024-07-15 17:32:25.326777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:30.396 [2024-07-15 17:32:25.326795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:30.396 [2024-07-15 17:32:25.326808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:30.396 ===================================================== 00:09:30.396 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:30.396 ===================================================== 00:09:30.396 Controller Capabilities/Features 00:09:30.396 ================================ 00:09:30.396 Vendor ID: 4e58 00:09:30.396 Subsystem Vendor ID: 4e58 00:09:30.396 Serial Number: SPDK1 00:09:30.396 Model Number: SPDK bdev Controller 00:09:30.396 Firmware Version: 24.09 00:09:30.396 Recommended Arb Burst: 6 00:09:30.396 IEEE OUI Identifier: 8d 6b 50 00:09:30.396 Multi-path I/O 00:09:30.396 May have multiple subsystem ports: Yes 00:09:30.396 May have multiple controllers: Yes 00:09:30.396 Associated with SR-IOV VF: No 00:09:30.396 Max Data Transfer Size: 131072 00:09:30.396 Max Number of Namespaces: 32 00:09:30.396 Max Number of I/O Queues: 127 00:09:30.396 NVMe Specification Version (VS): 1.3 00:09:30.396 NVMe Specification Version (Identify): 1.3 00:09:30.396 Maximum Queue Entries: 256 00:09:30.396 Contiguous Queues Required: Yes 00:09:30.396 Arbitration Mechanisms Supported 00:09:30.396 Weighted Round Robin: Not Supported 00:09:30.396 Vendor Specific: Not Supported 00:09:30.396 Reset Timeout: 15000 ms 00:09:30.396 Doorbell Stride: 4 bytes 00:09:30.396 NVM Subsystem Reset: Not Supported 00:09:30.396 Command Sets Supported 00:09:30.396 NVM Command Set: Supported 00:09:30.396 Boot Partition: Not Supported 00:09:30.396 Memory Page Size Minimum: 4096 bytes 00:09:30.396 Memory Page Size Maximum: 4096 bytes 00:09:30.396 Persistent Memory Region: Not Supported 
00:09:30.396 Optional Asynchronous Events Supported 00:09:30.396 Namespace Attribute Notices: Supported 00:09:30.396 Firmware Activation Notices: Not Supported 00:09:30.396 ANA Change Notices: Not Supported 00:09:30.396 PLE Aggregate Log Change Notices: Not Supported 00:09:30.396 LBA Status Info Alert Notices: Not Supported 00:09:30.396 EGE Aggregate Log Change Notices: Not Supported 00:09:30.396 Normal NVM Subsystem Shutdown event: Not Supported 00:09:30.396 Zone Descriptor Change Notices: Not Supported 00:09:30.396 Discovery Log Change Notices: Not Supported 00:09:30.396 Controller Attributes 00:09:30.396 128-bit Host Identifier: Supported 00:09:30.396 Non-Operational Permissive Mode: Not Supported 00:09:30.396 NVM Sets: Not Supported 00:09:30.396 Read Recovery Levels: Not Supported 00:09:30.396 Endurance Groups: Not Supported 00:09:30.396 Predictable Latency Mode: Not Supported 00:09:30.396 Traffic Based Keep ALive: Not Supported 00:09:30.396 Namespace Granularity: Not Supported 00:09:30.396 SQ Associations: Not Supported 00:09:30.396 UUID List: Not Supported 00:09:30.396 Multi-Domain Subsystem: Not Supported 00:09:30.396 Fixed Capacity Management: Not Supported 00:09:30.396 Variable Capacity Management: Not Supported 00:09:30.396 Delete Endurance Group: Not Supported 00:09:30.396 Delete NVM Set: Not Supported 00:09:30.396 Extended LBA Formats Supported: Not Supported 00:09:30.396 Flexible Data Placement Supported: Not Supported 00:09:30.396 00:09:30.396 Controller Memory Buffer Support 00:09:30.396 ================================ 00:09:30.396 Supported: No 00:09:30.396 00:09:30.396 Persistent Memory Region Support 00:09:30.396 ================================ 00:09:30.396 Supported: No 00:09:30.396 00:09:30.396 Admin Command Set Attributes 00:09:30.396 ============================ 00:09:30.396 Security Send/Receive: Not Supported 00:09:30.396 Format NVM: Not Supported 00:09:30.396 Firmware Activate/Download: Not Supported 00:09:30.396 Namespace Management: Not Supported 00:09:30.396 Device Self-Test: Not Supported 00:09:30.396 Directives: Not Supported 00:09:30.396 NVMe-MI: Not Supported 00:09:30.396 Virtualization Management: Not Supported 00:09:30.396 Doorbell Buffer Config: Not Supported 00:09:30.396 Get LBA Status Capability: Not Supported 00:09:30.396 Command & Feature Lockdown Capability: Not Supported 00:09:30.396 Abort Command Limit: 4 00:09:30.396 Async Event Request Limit: 4 00:09:30.396 Number of Firmware Slots: N/A 00:09:30.396 Firmware Slot 1 Read-Only: N/A 00:09:30.396 Firmware Activation Without Reset: N/A 00:09:30.396 Multiple Update Detection Support: N/A 00:09:30.396 Firmware Update Granularity: No Information Provided 00:09:30.396 Per-Namespace SMART Log: No 00:09:30.396 Asymmetric Namespace Access Log Page: Not Supported 00:09:30.396 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:30.396 Command Effects Log Page: Supported 00:09:30.396 Get Log Page Extended Data: Supported 00:09:30.396 Telemetry Log Pages: Not Supported 00:09:30.396 Persistent Event Log Pages: Not Supported 00:09:30.396 Supported Log Pages Log Page: May Support 00:09:30.396 Commands Supported & Effects Log Page: Not Supported 00:09:30.396 Feature Identifiers & Effects Log Page:May Support 00:09:30.396 NVMe-MI Commands & Effects Log Page: May Support 00:09:30.396 Data Area 4 for Telemetry Log: Not Supported 00:09:30.396 Error Log Page Entries Supported: 128 00:09:30.396 Keep Alive: Supported 00:09:30.396 Keep Alive Granularity: 10000 ms 00:09:30.396 00:09:30.396 NVM Command Set Attributes 
00:09:30.396 ========================== 00:09:30.396 Submission Queue Entry Size 00:09:30.396 Max: 64 00:09:30.396 Min: 64 00:09:30.396 Completion Queue Entry Size 00:09:30.396 Max: 16 00:09:30.396 Min: 16 00:09:30.396 Number of Namespaces: 32 00:09:30.396 Compare Command: Supported 00:09:30.396 Write Uncorrectable Command: Not Supported 00:09:30.396 Dataset Management Command: Supported 00:09:30.396 Write Zeroes Command: Supported 00:09:30.396 Set Features Save Field: Not Supported 00:09:30.396 Reservations: Not Supported 00:09:30.396 Timestamp: Not Supported 00:09:30.396 Copy: Supported 00:09:30.396 Volatile Write Cache: Present 00:09:30.396 Atomic Write Unit (Normal): 1 00:09:30.396 Atomic Write Unit (PFail): 1 00:09:30.396 Atomic Compare & Write Unit: 1 00:09:30.396 Fused Compare & Write: Supported 00:09:30.396 Scatter-Gather List 00:09:30.396 SGL Command Set: Supported (Dword aligned) 00:09:30.396 SGL Keyed: Not Supported 00:09:30.396 SGL Bit Bucket Descriptor: Not Supported 00:09:30.396 SGL Metadata Pointer: Not Supported 00:09:30.396 Oversized SGL: Not Supported 00:09:30.396 SGL Metadata Address: Not Supported 00:09:30.397 SGL Offset: Not Supported 00:09:30.397 Transport SGL Data Block: Not Supported 00:09:30.397 Replay Protected Memory Block: Not Supported 00:09:30.397 00:09:30.397 Firmware Slot Information 00:09:30.397 ========================= 00:09:30.397 Active slot: 1 00:09:30.397 Slot 1 Firmware Revision: 24.09 00:09:30.397 00:09:30.397 00:09:30.397 Commands Supported and Effects 00:09:30.397 ============================== 00:09:30.397 Admin Commands 00:09:30.397 -------------- 00:09:30.397 Get Log Page (02h): Supported 00:09:30.397 Identify (06h): Supported 00:09:30.397 Abort (08h): Supported 00:09:30.397 Set Features (09h): Supported 00:09:30.397 Get Features (0Ah): Supported 00:09:30.397 Asynchronous Event Request (0Ch): Supported 00:09:30.397 Keep Alive (18h): Supported 00:09:30.397 I/O Commands 00:09:30.397 ------------ 00:09:30.397 Flush (00h): Supported LBA-Change 00:09:30.397 Write (01h): Supported LBA-Change 00:09:30.397 Read (02h): Supported 00:09:30.397 Compare (05h): Supported 00:09:30.397 Write Zeroes (08h): Supported LBA-Change 00:09:30.397 Dataset Management (09h): Supported LBA-Change 00:09:30.397 Copy (19h): Supported LBA-Change 00:09:30.397 00:09:30.397 Error Log 00:09:30.397 ========= 00:09:30.397 00:09:30.397 Arbitration 00:09:30.397 =========== 00:09:30.397 Arbitration Burst: 1 00:09:30.397 00:09:30.397 Power Management 00:09:30.397 ================ 00:09:30.397 Number of Power States: 1 00:09:30.397 Current Power State: Power State #0 00:09:30.397 Power State #0: 00:09:30.397 Max Power: 0.00 W 00:09:30.397 Non-Operational State: Operational 00:09:30.397 Entry Latency: Not Reported 00:09:30.397 Exit Latency: Not Reported 00:09:30.397 Relative Read Throughput: 0 00:09:30.397 Relative Read Latency: 0 00:09:30.397 Relative Write Throughput: 0 00:09:30.397 Relative Write Latency: 0 00:09:30.397 Idle Power: Not Reported 00:09:30.397 Active Power: Not Reported 00:09:30.397 Non-Operational Permissive Mode: Not Supported 00:09:30.397 00:09:30.397 Health Information 00:09:30.397 ================== 00:09:30.397 Critical Warnings: 00:09:30.397 Available Spare Space: OK 00:09:30.397 Temperature: OK 00:09:30.397 Device Reliability: OK 00:09:30.397 Read Only: No 00:09:30.397 Volatile Memory Backup: OK 00:09:30.397 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:30.397 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:30.397 Available Spare: 0% 00:09:30.397 
Available Sp[2024-07-15 17:32:25.326960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:30.397 [2024-07-15 17:32:25.326977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:30.397 [2024-07-15 17:32:25.327027] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:30.397 [2024-07-15 17:32:25.327046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.397 [2024-07-15 17:32:25.327057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.397 [2024-07-15 17:32:25.327067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.397 [2024-07-15 17:32:25.327077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:30.397 [2024-07-15 17:32:25.330887] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:30.397 [2024-07-15 17:32:25.330910] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:30.397 [2024-07-15 17:32:25.331542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:30.397 [2024-07-15 17:32:25.331618] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:30.397 [2024-07-15 17:32:25.331632] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:30.397 [2024-07-15 17:32:25.332552] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:30.397 [2024-07-15 17:32:25.332577] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:30.397 [2024-07-15 17:32:25.332634] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:30.397 [2024-07-15 17:32:25.334592] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:30.397 are Threshold: 0% 00:09:30.397 Life Percentage Used: 0% 00:09:30.397 Data Units Read: 0 00:09:30.397 Data Units Written: 0 00:09:30.397 Host Read Commands: 0 00:09:30.397 Host Write Commands: 0 00:09:30.397 Controller Busy Time: 0 minutes 00:09:30.397 Power Cycles: 0 00:09:30.397 Power On Hours: 0 hours 00:09:30.397 Unsafe Shutdowns: 0 00:09:30.397 Unrecoverable Media Errors: 0 00:09:30.397 Lifetime Error Log Entries: 0 00:09:30.397 Warning Temperature Time: 0 minutes 00:09:30.397 Critical Temperature Time: 0 minutes 00:09:30.397 00:09:30.397 Number of Queues 00:09:30.397 ================ 00:09:30.397 Number of I/O Submission Queues: 127 00:09:30.397 Number of I/O Completion Queues: 127 00:09:30.397 00:09:30.397 Active Namespaces 00:09:30.397 ================= 00:09:30.397 Namespace ID:1 00:09:30.397 Error Recovery Timeout: Unlimited 00:09:30.397 Command 
Set Identifier: NVM (00h) 00:09:30.397 Deallocate: Supported 00:09:30.397 Deallocated/Unwritten Error: Not Supported 00:09:30.397 Deallocated Read Value: Unknown 00:09:30.397 Deallocate in Write Zeroes: Not Supported 00:09:30.397 Deallocated Guard Field: 0xFFFF 00:09:30.397 Flush: Supported 00:09:30.397 Reservation: Supported 00:09:30.397 Namespace Sharing Capabilities: Multiple Controllers 00:09:30.397 Size (in LBAs): 131072 (0GiB) 00:09:30.397 Capacity (in LBAs): 131072 (0GiB) 00:09:30.397 Utilization (in LBAs): 131072 (0GiB) 00:09:30.397 NGUID: F33C38A63F3D4A4E94CEF8A12CE4027A 00:09:30.397 UUID: f33c38a6-3f3d-4a4e-94ce-f8a12ce4027a 00:09:30.397 Thin Provisioning: Not Supported 00:09:30.397 Per-NS Atomic Units: Yes 00:09:30.397 Atomic Boundary Size (Normal): 0 00:09:30.397 Atomic Boundary Size (PFail): 0 00:09:30.397 Atomic Boundary Offset: 0 00:09:30.397 Maximum Single Source Range Length: 65535 00:09:30.397 Maximum Copy Length: 65535 00:09:30.397 Maximum Source Range Count: 1 00:09:30.397 NGUID/EUI64 Never Reused: No 00:09:30.397 Namespace Write Protected: No 00:09:30.397 Number of LBA Formats: 1 00:09:30.397 Current LBA Format: LBA Format #00 00:09:30.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:30.397 00:09:30.397 17:32:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:30.397 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.656 [2024-07-15 17:32:25.566720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:35.953 Initializing NVMe Controllers 00:09:35.953 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:35.953 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:35.953 Initialization complete. Launching workers. 00:09:35.953 ======================================================== 00:09:35.953 Latency(us) 00:09:35.953 Device Information : IOPS MiB/s Average min max 00:09:35.953 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34528.20 134.88 3708.45 1153.21 9845.78 00:09:35.953 ======================================================== 00:09:35.953 Total : 34528.20 134.88 3708.45 1153.21 9845.78 00:09:35.953 00:09:35.953 [2024-07-15 17:32:30.590575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:35.953 17:32:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:35.953 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.953 [2024-07-15 17:32:30.830656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:41.227 Initializing NVMe Controllers 00:09:41.227 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:41.227 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:41.227 Initialization complete. Launching workers. 
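Both passes here, the 4 KiB read run whose latency table appears above and the write run whose table follows, come from spdk_nvme_perf with the arguments echoed in the shell trace at @84 and @85; the later reconnect, arbitration, hello_world and overhead examples reuse the same -r transport-ID pattern. A minimal sketch of that invocation, assuming the target is already listening on the vfio-user socket and reusing only flags shown in this log:

# Sketch only: 5-second 4 KiB passes at queue depth 128 on core mask 0x2,
# against the vfio-user endpoint exercised by this test.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
for workload in read write; do
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
done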
00:09:41.227 ======================================================== 00:09:41.227 Latency(us) 00:09:41.227 Device Information : IOPS MiB/s Average min max 00:09:41.227 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.41 6958.61 11974.39 00:09:41.227 ======================================================== 00:09:41.227 Total : 16051.20 62.70 7984.41 6958.61 11974.39 00:09:41.227 00:09:41.227 [2024-07-15 17:32:35.868036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:41.227 17:32:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:41.227 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.227 [2024-07-15 17:32:36.088128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:46.501 [2024-07-15 17:32:41.158203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:46.501 Initializing NVMe Controllers 00:09:46.501 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:46.501 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:46.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:46.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:46.501 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:46.501 Initialization complete. Launching workers. 00:09:46.501 Starting thread on core 2 00:09:46.501 Starting thread on core 3 00:09:46.501 Starting thread on core 1 00:09:46.501 17:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:46.501 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.501 [2024-07-15 17:32:41.460367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:50.691 [2024-07-15 17:32:45.096128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:50.691 Initializing NVMe Controllers 00:09:50.691 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:50.691 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:50.691 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:50.691 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:50.691 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:50.691 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:50.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:50.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:50.691 Initialization complete. Launching workers. 
00:09:50.691 Starting thread on core 1 with urgent priority queue 00:09:50.691 Starting thread on core 2 with urgent priority queue 00:09:50.691 Starting thread on core 3 with urgent priority queue 00:09:50.691 Starting thread on core 0 with urgent priority queue 00:09:50.691 SPDK bdev Controller (SPDK1 ) core 0: 1742.33 IO/s 57.39 secs/100000 ios 00:09:50.691 SPDK bdev Controller (SPDK1 ) core 1: 1830.67 IO/s 54.62 secs/100000 ios 00:09:50.691 SPDK bdev Controller (SPDK1 ) core 2: 1958.33 IO/s 51.06 secs/100000 ios 00:09:50.691 SPDK bdev Controller (SPDK1 ) core 3: 2009.67 IO/s 49.76 secs/100000 ios 00:09:50.691 ======================================================== 00:09:50.691 00:09:50.691 17:32:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:50.691 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.691 [2024-07-15 17:32:45.389343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:50.691 Initializing NVMe Controllers 00:09:50.691 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:50.691 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:50.691 Namespace ID: 1 size: 0GB 00:09:50.691 Initialization complete. 00:09:50.691 INFO: using host memory buffer for IO 00:09:50.691 Hello world! 00:09:50.691 [2024-07-15 17:32:45.422870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:50.691 17:32:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:50.691 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.691 [2024-07-15 17:32:45.727342] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:51.658 Initializing NVMe Controllers 00:09:51.658 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:51.658 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:51.658 Initialization complete. Launching workers. 
00:09:51.658 submit (in ns) avg, min, max = 8686.7, 3512.2, 4016594.4 00:09:51.658 complete (in ns) avg, min, max = 25289.9, 2064.4, 4014646.7 00:09:51.658 00:09:51.658 Submit histogram 00:09:51.658 ================ 00:09:51.658 Range in us Cumulative Count 00:09:51.658 3.508 - 3.532: 0.3474% ( 47) 00:09:51.658 3.532 - 3.556: 1.2713% ( 125) 00:09:51.658 3.556 - 3.579: 4.3980% ( 423) 00:09:51.658 3.579 - 3.603: 9.0768% ( 633) 00:09:51.658 3.603 - 3.627: 16.7049% ( 1032) 00:09:51.658 3.627 - 3.650: 25.6856% ( 1215) 00:09:51.658 3.650 - 3.674: 34.2597% ( 1160) 00:09:51.658 3.674 - 3.698: 41.3334% ( 957) 00:09:51.658 3.698 - 3.721: 48.7915% ( 1009) 00:09:51.658 3.721 - 3.745: 53.9138% ( 693) 00:09:51.658 3.745 - 3.769: 57.8313% ( 530) 00:09:51.658 3.769 - 3.793: 61.4753% ( 493) 00:09:51.658 3.793 - 3.816: 64.7276% ( 440) 00:09:51.658 3.816 - 3.840: 68.2682% ( 479) 00:09:51.658 3.840 - 3.864: 72.3039% ( 546) 00:09:51.658 3.864 - 3.887: 76.3767% ( 551) 00:09:51.658 3.887 - 3.911: 79.7250% ( 453) 00:09:51.658 3.911 - 3.935: 82.8590% ( 424) 00:09:51.658 3.935 - 3.959: 85.3574% ( 338) 00:09:51.658 3.959 - 3.982: 87.4344% ( 281) 00:09:51.658 3.982 - 4.006: 89.1640% ( 234) 00:09:51.658 4.006 - 4.030: 90.4058% ( 168) 00:09:51.658 4.030 - 4.053: 91.5737% ( 158) 00:09:51.658 4.053 - 4.077: 92.5567% ( 133) 00:09:51.658 4.077 - 4.101: 93.5250% ( 131) 00:09:51.658 4.101 - 4.124: 94.3824% ( 116) 00:09:51.658 4.124 - 4.148: 95.0107% ( 85) 00:09:51.658 4.148 - 4.172: 95.5355% ( 71) 00:09:51.658 4.172 - 4.196: 95.8755% ( 46) 00:09:51.658 4.196 - 4.219: 96.1934% ( 43) 00:09:51.658 4.219 - 4.243: 96.4077% ( 29) 00:09:51.658 4.243 - 4.267: 96.5629% ( 21) 00:09:51.658 4.267 - 4.290: 96.7108% ( 20) 00:09:51.658 4.290 - 4.314: 96.8143% ( 14) 00:09:51.658 4.314 - 4.338: 96.9399% ( 17) 00:09:51.658 4.338 - 4.361: 96.9916% ( 7) 00:09:51.658 4.361 - 4.385: 97.0803% ( 12) 00:09:51.658 4.385 - 4.409: 97.1469% ( 9) 00:09:51.658 4.409 - 4.433: 97.2282% ( 11) 00:09:51.658 4.433 - 4.456: 97.3021% ( 10) 00:09:51.658 4.456 - 4.480: 97.3390% ( 5) 00:09:51.658 4.480 - 4.504: 97.3686% ( 4) 00:09:51.658 4.504 - 4.527: 97.3760% ( 1) 00:09:51.658 4.527 - 4.551: 97.4204% ( 6) 00:09:51.658 4.551 - 4.575: 97.4499% ( 4) 00:09:51.658 4.575 - 4.599: 97.4943% ( 6) 00:09:51.658 4.599 - 4.622: 97.5091% ( 2) 00:09:51.658 4.646 - 4.670: 97.5164% ( 1) 00:09:51.658 4.670 - 4.693: 97.5386% ( 3) 00:09:51.658 4.693 - 4.717: 97.5460% ( 1) 00:09:51.658 4.717 - 4.741: 97.5534% ( 1) 00:09:51.658 4.741 - 4.764: 97.5608% ( 1) 00:09:51.658 4.764 - 4.788: 97.5904% ( 4) 00:09:51.658 4.788 - 4.812: 97.6199% ( 4) 00:09:51.658 4.812 - 4.836: 97.6643% ( 6) 00:09:51.658 4.836 - 4.859: 97.7160% ( 7) 00:09:51.658 4.859 - 4.883: 97.7456% ( 4) 00:09:51.658 4.883 - 4.907: 97.8195% ( 10) 00:09:51.658 4.907 - 4.930: 97.8343% ( 2) 00:09:51.658 4.930 - 4.954: 97.8712% ( 5) 00:09:51.658 4.954 - 4.978: 97.9082% ( 5) 00:09:51.658 4.978 - 5.001: 97.9304% ( 3) 00:09:51.658 5.001 - 5.025: 97.9673% ( 5) 00:09:51.658 5.025 - 5.049: 97.9969% ( 4) 00:09:51.658 5.049 - 5.073: 98.0265% ( 4) 00:09:51.658 5.073 - 5.096: 98.0412% ( 2) 00:09:51.658 5.096 - 5.120: 98.0782% ( 5) 00:09:51.658 5.120 - 5.144: 98.1004% ( 3) 00:09:51.658 5.144 - 5.167: 98.1152% ( 2) 00:09:51.658 5.167 - 5.191: 98.1299% ( 2) 00:09:51.658 5.239 - 5.262: 98.1595% ( 4) 00:09:51.658 5.262 - 5.286: 98.1669% ( 1) 00:09:51.658 5.286 - 5.310: 98.1817% ( 2) 00:09:51.658 5.310 - 5.333: 98.1891% ( 1) 00:09:51.658 5.333 - 5.357: 98.2039% ( 2) 00:09:51.658 5.357 - 5.381: 98.2260% ( 3) 00:09:51.658 5.381 - 5.404: 98.2334% ( 
1) 00:09:51.658 5.404 - 5.428: 98.2408% ( 1) 00:09:51.658 5.476 - 5.499: 98.2482% ( 1) 00:09:51.658 5.499 - 5.523: 98.2630% ( 2) 00:09:51.658 5.547 - 5.570: 98.2852% ( 3) 00:09:51.658 5.641 - 5.665: 98.2926% ( 1) 00:09:51.658 5.807 - 5.831: 98.2999% ( 1) 00:09:51.658 5.902 - 5.926: 98.3073% ( 1) 00:09:51.658 5.950 - 5.973: 98.3221% ( 2) 00:09:51.658 5.997 - 6.021: 98.3295% ( 1) 00:09:51.658 6.021 - 6.044: 98.3369% ( 1) 00:09:51.658 6.044 - 6.068: 98.3443% ( 1) 00:09:51.658 6.068 - 6.116: 98.3591% ( 2) 00:09:51.658 6.305 - 6.353: 98.3665% ( 1) 00:09:51.658 6.447 - 6.495: 98.3739% ( 1) 00:09:51.659 6.827 - 6.874: 98.3813% ( 1) 00:09:51.659 7.159 - 7.206: 98.3886% ( 1) 00:09:51.659 7.206 - 7.253: 98.3960% ( 1) 00:09:51.659 7.253 - 7.301: 98.4034% ( 1) 00:09:51.659 7.301 - 7.348: 98.4108% ( 1) 00:09:51.659 7.348 - 7.396: 98.4182% ( 1) 00:09:51.659 7.396 - 7.443: 98.4256% ( 1) 00:09:51.659 7.490 - 7.538: 98.4404% ( 2) 00:09:51.659 7.538 - 7.585: 98.4478% ( 1) 00:09:51.659 7.585 - 7.633: 98.4626% ( 2) 00:09:51.659 7.633 - 7.680: 98.4773% ( 2) 00:09:51.659 7.680 - 7.727: 98.4921% ( 2) 00:09:51.659 7.822 - 7.870: 98.5143% ( 3) 00:09:51.659 7.917 - 7.964: 98.5217% ( 1) 00:09:51.659 7.964 - 8.012: 98.5291% ( 1) 00:09:51.659 8.012 - 8.059: 98.5365% ( 1) 00:09:51.659 8.059 - 8.107: 98.5513% ( 2) 00:09:51.659 8.154 - 8.201: 98.5660% ( 2) 00:09:51.659 8.201 - 8.249: 98.5882% ( 3) 00:09:51.659 8.296 - 8.344: 98.6030% ( 2) 00:09:51.659 8.344 - 8.391: 98.6104% ( 1) 00:09:51.659 8.439 - 8.486: 98.6252% ( 2) 00:09:51.659 8.533 - 8.581: 98.6326% ( 1) 00:09:51.659 8.676 - 8.723: 98.6400% ( 1) 00:09:51.659 8.723 - 8.770: 98.6474% ( 1) 00:09:51.659 8.770 - 8.818: 98.6547% ( 1) 00:09:51.659 8.818 - 8.865: 98.6621% ( 1) 00:09:51.659 9.102 - 9.150: 98.6769% ( 2) 00:09:51.659 9.197 - 9.244: 98.6991% ( 3) 00:09:51.659 9.292 - 9.339: 98.7065% ( 1) 00:09:51.659 9.434 - 9.481: 98.7139% ( 1) 00:09:51.659 9.529 - 9.576: 98.7213% ( 1) 00:09:51.659 9.576 - 9.624: 98.7287% ( 1) 00:09:51.659 9.671 - 9.719: 98.7360% ( 1) 00:09:51.659 10.050 - 10.098: 98.7434% ( 1) 00:09:51.659 10.667 - 10.714: 98.7508% ( 1) 00:09:51.659 10.714 - 10.761: 98.7582% ( 1) 00:09:51.659 10.904 - 10.951: 98.7656% ( 1) 00:09:51.659 10.999 - 11.046: 98.7730% ( 1) 00:09:51.659 11.093 - 11.141: 98.7804% ( 1) 00:09:51.659 11.141 - 11.188: 98.7878% ( 1) 00:09:51.659 11.188 - 11.236: 98.7952% ( 1) 00:09:51.659 11.520 - 11.567: 98.8026% ( 1) 00:09:51.659 11.852 - 11.899: 98.8100% ( 1) 00:09:51.659 12.089 - 12.136: 98.8174% ( 1) 00:09:51.659 12.231 - 12.326: 98.8247% ( 1) 00:09:51.659 12.705 - 12.800: 98.8321% ( 1) 00:09:51.659 13.179 - 13.274: 98.8395% ( 1) 00:09:51.659 13.464 - 13.559: 98.8469% ( 1) 00:09:51.659 13.559 - 13.653: 98.8543% ( 1) 00:09:51.659 13.748 - 13.843: 98.8617% ( 1) 00:09:51.659 14.127 - 14.222: 98.8691% ( 1) 00:09:51.659 14.601 - 14.696: 98.8765% ( 1) 00:09:51.659 17.161 - 17.256: 98.8913% ( 2) 00:09:51.659 17.256 - 17.351: 98.8987% ( 1) 00:09:51.659 17.351 - 17.446: 98.9134% ( 2) 00:09:51.659 17.446 - 17.541: 98.9430% ( 4) 00:09:51.659 17.541 - 17.636: 98.9652% ( 3) 00:09:51.659 17.636 - 17.730: 99.0169% ( 7) 00:09:51.659 17.730 - 17.825: 99.0908% ( 10) 00:09:51.659 17.825 - 17.920: 99.1500% ( 8) 00:09:51.659 17.920 - 18.015: 99.2239% ( 10) 00:09:51.659 18.015 - 18.110: 99.2608% ( 5) 00:09:51.659 18.110 - 18.204: 99.3126% ( 7) 00:09:51.659 18.204 - 18.299: 99.3643% ( 7) 00:09:51.659 18.299 - 18.394: 99.4382% ( 10) 00:09:51.659 18.394 - 18.489: 99.5122% ( 10) 00:09:51.659 18.489 - 18.584: 99.5713% ( 8) 00:09:51.659 18.584 - 18.679: 
99.6230% ( 7) 00:09:51.659 18.679 - 18.773: 99.6526% ( 4) 00:09:51.659 18.773 - 18.868: 99.7191% ( 9) 00:09:51.659 18.868 - 18.963: 99.7339% ( 2) 00:09:51.659 18.963 - 19.058: 99.7635% ( 4) 00:09:51.659 19.058 - 19.153: 99.7856% ( 3) 00:09:51.659 19.153 - 19.247: 99.8004% ( 2) 00:09:51.659 19.247 - 19.342: 99.8078% ( 1) 00:09:51.659 19.627 - 19.721: 99.8152% ( 1) 00:09:51.659 19.816 - 19.911: 99.8226% ( 1) 00:09:51.659 20.006 - 20.101: 99.8300% ( 1) 00:09:51.659 20.101 - 20.196: 99.8374% ( 1) 00:09:51.659 20.196 - 20.290: 99.8448% ( 1) 00:09:51.659 20.480 - 20.575: 99.8522% ( 1) 00:09:51.659 21.239 - 21.333: 99.8596% ( 1) 00:09:51.659 21.713 - 21.807: 99.8670% ( 1) 00:09:51.659 22.187 - 22.281: 99.8743% ( 1) 00:09:51.659 22.661 - 22.756: 99.8817% ( 1) 00:09:51.659 3980.705 - 4004.978: 99.9778% ( 13) 00:09:51.659 4004.978 - 4029.250: 100.0000% ( 3) 00:09:51.659 00:09:51.659 Complete histogram 00:09:51.659 ================== 00:09:51.659 Range in us Cumulative Count 00:09:51.659 2.062 - 2.074: 10.2003% ( 1380) 00:09:51.659 2.074 - 2.086: 39.7886% ( 4003) 00:09:51.659 2.086 - 2.098: 42.1982% ( 326) 00:09:51.659 2.098 - 2.110: 51.7777% ( 1296) 00:09:51.659 2.110 - 2.121: 58.3339% ( 887) 00:09:51.659 2.121 - 2.133: 59.7014% ( 185) 00:09:51.659 2.133 - 2.145: 67.8395% ( 1101) 00:09:51.659 2.145 - 2.157: 73.6492% ( 786) 00:09:51.659 2.157 - 2.169: 74.4918% ( 114) 00:09:51.659 2.169 - 2.181: 78.0693% ( 484) 00:09:51.659 2.181 - 2.193: 79.9763% ( 258) 00:09:51.659 2.193 - 2.204: 80.5381% ( 76) 00:09:51.659 2.204 - 2.216: 84.1008% ( 482) 00:09:51.659 2.216 - 2.228: 87.7301% ( 491) 00:09:51.659 2.228 - 2.240: 89.6297% ( 257) 00:09:51.659 2.240 - 2.252: 91.9211% ( 310) 00:09:51.659 2.252 - 2.264: 93.1037% ( 160) 00:09:51.659 2.264 - 2.276: 93.3772% ( 37) 00:09:51.659 2.276 - 2.287: 93.8872% ( 69) 00:09:51.659 2.287 - 2.299: 94.3307% ( 60) 00:09:51.659 2.299 - 2.311: 95.0477% ( 97) 00:09:51.659 2.311 - 2.323: 95.3507% ( 41) 00:09:51.659 2.323 - 2.335: 95.4616% ( 15) 00:09:51.659 2.335 - 2.347: 95.5429% ( 11) 00:09:51.659 2.347 - 2.359: 95.6464% ( 14) 00:09:51.659 2.359 - 2.370: 95.8977% ( 34) 00:09:51.659 2.370 - 2.382: 96.2821% ( 52) 00:09:51.659 2.382 - 2.394: 96.8512% ( 77) 00:09:51.659 2.394 - 2.406: 97.1321% ( 38) 00:09:51.659 2.406 - 2.418: 97.2725% ( 19) 00:09:51.659 2.418 - 2.430: 97.4573% ( 25) 00:09:51.659 2.430 - 2.441: 97.5682% ( 15) 00:09:51.659 2.441 - 2.453: 97.6125% ( 6) 00:09:51.659 2.453 - 2.465: 97.7086% ( 13) 00:09:51.659 2.465 - 2.477: 97.8638% ( 21) 00:09:51.659 2.477 - 2.489: 97.9452% ( 11) 00:09:51.659 2.489 - 2.501: 98.0339% ( 12) 00:09:51.659 2.501 - 2.513: 98.1373% ( 14) 00:09:51.659 2.513 - 2.524: 98.1743% ( 5) 00:09:51.659 2.524 - 2.536: 98.2039% ( 4) 00:09:51.659 2.536 - 2.548: 98.2260% ( 3) 00:09:51.659 2.548 - 2.560: 98.2408% ( 2) 00:09:51.659 2.560 - 2.572: 98.2556% ( 2) 00:09:51.659 2.572 - 2.584: 98.2704% ( 2) 00:09:51.659 2.584 - 2.596: 98.2852% ( 2) 00:09:51.659 2.596 - 2.607: 98.2926% ( 1) 00:09:51.659 2.631 - 2.643: 98.2999% ( 1) 00:09:51.659 2.643 - 2.655: 98.3147% ( 2) 00:09:51.659 2.655 - 2.667: 98.3221% ( 1) 00:09:51.659 2.667 - 2.679: 98.3295% ( 1) 00:09:51.659 2.679 - 2.690: 98.3369% ( 1) 00:09:51.659 2.690 - 2.702: 98.3443% ( 1) 00:09:51.659 2.714 - 2.726: 98.3517% ( 1) 00:09:51.659 2.726 - 2.738: 98.3739% ( 3) 00:09:51.659 2.761 - 2.773: 98.3813% ( 1) 00:09:51.659 2.773 - 2.785: 98.3886% ( 1) 00:09:51.659 2.785 - 2.797: 98.3960% ( 1) 00:09:51.659 2.844 - 2.856: 98.4034% ( 1) 00:09:51.659 2.856 - 2.868: 98.4108% ( 1) 00:09:51.659 2.904 - 2.916: 98.4182% ( 
1) 00:09:51.659 2.916 - 2.927: 98.4256% ( 1) 00:09:51.659 3.200 - 3.224: 98.4330% ( 1) 00:09:51.659 3.247 - 3.271: 98.4404% ( 1) 00:09:51.659 3.319 - 3.342: 98.4478% ( 1) 00:09:51.659 3.366 - 3.390: 98.4552% ( 1) 00:09:51.659 3.390 - 3.413: 98.4773% ( 3) 00:09:51.659 3.413 - 3.437: 98.4847% ( 1) 00:09:51.659 3.461 - 3.484: 98.4921% ( 1) 00:09:51.659 3.532 - 3.556: 98.5069% ( 2) 00:09:51.659 3.556 - 3.579: 98.5217% ( 2) 00:09:51.659 3.579 - 3.603: 98.5291% ( 1) 00:09:51.659 3.603 - 3.627: 98.5365% ( 1) 00:09:51.659 3.650 - 3.674: 9[2024-07-15 17:32:46.746442] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:51.918 8.5513% ( 2) 00:09:51.918 3.674 - 3.698: 98.5587% ( 1) 00:09:51.918 3.721 - 3.745: 98.5660% ( 1) 00:09:51.918 3.793 - 3.816: 98.5734% ( 1) 00:09:51.918 3.816 - 3.840: 98.5808% ( 1) 00:09:51.918 3.959 - 3.982: 98.5882% ( 1) 00:09:51.918 4.006 - 4.030: 98.5956% ( 1) 00:09:51.918 4.148 - 4.172: 98.6030% ( 1) 00:09:51.918 4.504 - 4.527: 98.6104% ( 1) 00:09:51.918 5.049 - 5.073: 98.6178% ( 1) 00:09:51.918 5.215 - 5.239: 98.6252% ( 1) 00:09:51.918 5.428 - 5.452: 98.6326% ( 1) 00:09:51.918 5.594 - 5.618: 98.6400% ( 1) 00:09:51.918 5.618 - 5.641: 98.6621% ( 3) 00:09:51.918 5.641 - 5.665: 98.6769% ( 2) 00:09:51.918 5.713 - 5.736: 98.6843% ( 1) 00:09:51.918 5.879 - 5.902: 98.6917% ( 1) 00:09:51.918 5.902 - 5.926: 98.6991% ( 1) 00:09:51.918 5.950 - 5.973: 98.7065% ( 1) 00:09:51.918 5.973 - 5.997: 98.7139% ( 1) 00:09:51.918 6.116 - 6.163: 98.7213% ( 1) 00:09:51.918 6.163 - 6.210: 98.7287% ( 1) 00:09:51.918 6.353 - 6.400: 98.7360% ( 1) 00:09:51.918 6.447 - 6.495: 98.7434% ( 1) 00:09:51.918 6.495 - 6.542: 98.7508% ( 1) 00:09:51.918 6.542 - 6.590: 98.7656% ( 2) 00:09:51.918 6.827 - 6.874: 98.7730% ( 1) 00:09:51.918 7.016 - 7.064: 98.7804% ( 1) 00:09:51.918 7.253 - 7.301: 98.7952% ( 2) 00:09:51.918 7.585 - 7.633: 98.8026% ( 1) 00:09:51.918 7.870 - 7.917: 98.8100% ( 1) 00:09:51.918 9.292 - 9.339: 98.8174% ( 1) 00:09:51.918 12.041 - 12.089: 98.8247% ( 1) 00:09:51.918 15.644 - 15.739: 98.8395% ( 2) 00:09:51.918 15.739 - 15.834: 98.8691% ( 4) 00:09:51.918 15.834 - 15.929: 98.8913% ( 3) 00:09:51.918 15.929 - 16.024: 98.9134% ( 3) 00:09:51.918 16.024 - 16.119: 98.9356% ( 3) 00:09:51.918 16.119 - 16.213: 98.9726% ( 5) 00:09:51.918 16.213 - 16.308: 99.0169% ( 6) 00:09:51.918 16.308 - 16.403: 99.0465% ( 4) 00:09:51.918 16.403 - 16.498: 99.1204% ( 10) 00:09:51.918 16.498 - 16.593: 99.1426% ( 3) 00:09:51.918 16.593 - 16.687: 99.2091% ( 9) 00:09:51.918 16.687 - 16.782: 99.2535% ( 6) 00:09:51.918 16.782 - 16.877: 99.2978% ( 6) 00:09:51.918 16.877 - 16.972: 99.3348% ( 5) 00:09:51.918 17.067 - 17.161: 99.3422% ( 1) 00:09:51.918 17.161 - 17.256: 99.3569% ( 2) 00:09:51.918 17.256 - 17.351: 99.3717% ( 2) 00:09:51.918 17.351 - 17.446: 99.3791% ( 1) 00:09:51.918 17.446 - 17.541: 99.3865% ( 1) 00:09:51.918 17.730 - 17.825: 99.3939% ( 1) 00:09:51.918 17.825 - 17.920: 99.4013% ( 1) 00:09:51.918 17.920 - 18.015: 99.4087% ( 1) 00:09:51.918 18.015 - 18.110: 99.4161% ( 1) 00:09:51.918 44.753 - 44.942: 99.4235% ( 1) 00:09:51.918 3980.705 - 4004.978: 99.9409% ( 70) 00:09:51.918 4004.978 - 4029.250: 100.0000% ( 8) 00:09:51.918 00:09:51.918 17:32:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:51.918 17:32:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:51.918 17:32:46 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:51.918 17:32:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:51.918 17:32:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:52.177 [ 00:09:52.177 { 00:09:52.177 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:52.177 "subtype": "Discovery", 00:09:52.177 "listen_addresses": [], 00:09:52.177 "allow_any_host": true, 00:09:52.177 "hosts": [] 00:09:52.177 }, 00:09:52.177 { 00:09:52.177 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:52.177 "subtype": "NVMe", 00:09:52.177 "listen_addresses": [ 00:09:52.177 { 00:09:52.177 "trtype": "VFIOUSER", 00:09:52.177 "adrfam": "IPv4", 00:09:52.177 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:52.177 "trsvcid": "0" 00:09:52.177 } 00:09:52.177 ], 00:09:52.177 "allow_any_host": true, 00:09:52.177 "hosts": [], 00:09:52.177 "serial_number": "SPDK1", 00:09:52.177 "model_number": "SPDK bdev Controller", 00:09:52.177 "max_namespaces": 32, 00:09:52.177 "min_cntlid": 1, 00:09:52.177 "max_cntlid": 65519, 00:09:52.177 "namespaces": [ 00:09:52.177 { 00:09:52.177 "nsid": 1, 00:09:52.177 "bdev_name": "Malloc1", 00:09:52.177 "name": "Malloc1", 00:09:52.177 "nguid": "F33C38A63F3D4A4E94CEF8A12CE4027A", 00:09:52.177 "uuid": "f33c38a6-3f3d-4a4e-94ce-f8a12ce4027a" 00:09:52.177 } 00:09:52.177 ] 00:09:52.177 }, 00:09:52.177 { 00:09:52.177 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:52.177 "subtype": "NVMe", 00:09:52.177 "listen_addresses": [ 00:09:52.177 { 00:09:52.177 "trtype": "VFIOUSER", 00:09:52.177 "adrfam": "IPv4", 00:09:52.177 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:52.177 "trsvcid": "0" 00:09:52.177 } 00:09:52.177 ], 00:09:52.177 "allow_any_host": true, 00:09:52.177 "hosts": [], 00:09:52.177 "serial_number": "SPDK2", 00:09:52.177 "model_number": "SPDK bdev Controller", 00:09:52.177 "max_namespaces": 32, 00:09:52.177 "min_cntlid": 1, 00:09:52.177 "max_cntlid": 65519, 00:09:52.177 "namespaces": [ 00:09:52.177 { 00:09:52.177 "nsid": 1, 00:09:52.177 "bdev_name": "Malloc2", 00:09:52.177 "name": "Malloc2", 00:09:52.177 "nguid": "F5126E9E58D84AA8A3045C8F0B38953E", 00:09:52.177 "uuid": "f5126e9e-58d8-4aa8-a304-5c8f0b38953e" 00:09:52.177 } 00:09:52.177 ] 00:09:52.177 } 00:09:52.177 ] 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2177688 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:52.177 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:52.177 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.177 [2024-07-15 17:32:47.258333] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.436 Malloc3 00:09:52.436 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:52.694 [2024-07-15 17:32:47.612868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.694 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:52.694 Asynchronous Event Request test 00:09:52.694 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.694 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.694 Registering asynchronous event callbacks... 00:09:52.694 Starting namespace attribute notice tests for all controllers... 00:09:52.694 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:52.694 aer_cb - Changed Namespace 00:09:52.694 Cleaning up... 00:09:52.956 [ 00:09:52.956 { 00:09:52.956 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:52.956 "subtype": "Discovery", 00:09:52.956 "listen_addresses": [], 00:09:52.956 "allow_any_host": true, 00:09:52.956 "hosts": [] 00:09:52.956 }, 00:09:52.956 { 00:09:52.956 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:52.956 "subtype": "NVMe", 00:09:52.956 "listen_addresses": [ 00:09:52.956 { 00:09:52.956 "trtype": "VFIOUSER", 00:09:52.956 "adrfam": "IPv4", 00:09:52.956 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:52.956 "trsvcid": "0" 00:09:52.956 } 00:09:52.956 ], 00:09:52.956 "allow_any_host": true, 00:09:52.956 "hosts": [], 00:09:52.956 "serial_number": "SPDK1", 00:09:52.956 "model_number": "SPDK bdev Controller", 00:09:52.956 "max_namespaces": 32, 00:09:52.956 "min_cntlid": 1, 00:09:52.956 "max_cntlid": 65519, 00:09:52.956 "namespaces": [ 00:09:52.956 { 00:09:52.956 "nsid": 1, 00:09:52.956 "bdev_name": "Malloc1", 00:09:52.956 "name": "Malloc1", 00:09:52.956 "nguid": "F33C38A63F3D4A4E94CEF8A12CE4027A", 00:09:52.956 "uuid": "f33c38a6-3f3d-4a4e-94ce-f8a12ce4027a" 00:09:52.956 }, 00:09:52.956 { 00:09:52.956 "nsid": 2, 00:09:52.956 "bdev_name": "Malloc3", 00:09:52.956 "name": "Malloc3", 00:09:52.956 "nguid": "93D1471B486B424B94129F74B0410FA5", 00:09:52.956 "uuid": "93d1471b-486b-424b-9412-9f74b0410fa5" 00:09:52.956 } 00:09:52.956 ] 00:09:52.956 }, 00:09:52.956 { 00:09:52.956 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:52.956 "subtype": "NVMe", 00:09:52.956 "listen_addresses": [ 00:09:52.956 { 00:09:52.956 "trtype": "VFIOUSER", 00:09:52.956 "adrfam": "IPv4", 00:09:52.956 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:52.956 "trsvcid": "0" 00:09:52.956 } 00:09:52.956 ], 00:09:52.956 "allow_any_host": true, 00:09:52.956 "hosts": [], 00:09:52.956 "serial_number": "SPDK2", 00:09:52.956 "model_number": "SPDK bdev Controller", 00:09:52.956 
"max_namespaces": 32, 00:09:52.956 "min_cntlid": 1, 00:09:52.956 "max_cntlid": 65519, 00:09:52.956 "namespaces": [ 00:09:52.956 { 00:09:52.956 "nsid": 1, 00:09:52.956 "bdev_name": "Malloc2", 00:09:52.956 "name": "Malloc2", 00:09:52.956 "nguid": "F5126E9E58D84AA8A3045C8F0B38953E", 00:09:52.956 "uuid": "f5126e9e-58d8-4aa8-a304-5c8f0b38953e" 00:09:52.956 } 00:09:52.956 ] 00:09:52.956 } 00:09:52.956 ] 00:09:52.956 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2177688 00:09:52.956 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:52.956 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:52.956 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:52.956 17:32:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:52.956 [2024-07-15 17:32:47.898347] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:09:52.956 [2024-07-15 17:32:47.898391] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177823 ] 00:09:52.956 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.956 [2024-07-15 17:32:47.932843] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:52.956 [2024-07-15 17:32:47.941204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.956 [2024-07-15 17:32:47.941250] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb7bed59000 00:09:52.956 [2024-07-15 17:32:47.942186] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.956 [2024-07-15 17:32:47.943193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.956 [2024-07-15 17:32:47.944203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.956 [2024-07-15 17:32:47.945209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.956 [2024-07-15 17:32:47.946212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.956 [2024-07-15 17:32:47.947235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.957 [2024-07-15 17:32:47.948227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.957 [2024-07-15 17:32:47.949233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.957 [2024-07-15 17:32:47.950238] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.957 [2024-07-15 17:32:47.950260] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb7bed4e000 00:09:52.957 [2024-07-15 17:32:47.951383] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.957 [2024-07-15 17:32:47.964582] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:52.957 [2024-07-15 17:32:47.964623] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:52.957 [2024-07-15 17:32:47.972750] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:52.957 [2024-07-15 17:32:47.972802] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:52.957 [2024-07-15 17:32:47.972908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:09:52.957 [2024-07-15 17:32:47.972934] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:52.957 [2024-07-15 17:32:47.972949] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:52.957 [2024-07-15 17:32:47.973757] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:52.957 [2024-07-15 17:32:47.973777] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:52.957 [2024-07-15 17:32:47.973789] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:52.957 [2024-07-15 17:32:47.974764] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:52.957 [2024-07-15 17:32:47.974784] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:52.957 [2024-07-15 17:32:47.974798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:52.957 [2024-07-15 17:32:47.975776] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:52.957 [2024-07-15 17:32:47.975797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:52.957 [2024-07-15 17:32:47.976783] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:52.957 [2024-07-15 17:32:47.976804] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:52.957 [2024-07-15 17:32:47.976813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:52.957 [2024-07-15 17:32:47.976825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:52.957 [2024-07-15 17:32:47.976944] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:52.957 [2024-07-15 17:32:47.976953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:52.957 [2024-07-15 17:32:47.976962] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:52.957 [2024-07-15 17:32:47.977798] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:52.957 [2024-07-15 17:32:47.978799] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:52.957 [2024-07-15 17:32:47.979812] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:52.957 [2024-07-15 17:32:47.980807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:52.957 [2024-07-15 17:32:47.980890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:52.957 [2024-07-15 17:32:47.981836] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:52.957 [2024-07-15 17:32:47.981870] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:52.957 [2024-07-15 17:32:47.981886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:47.981910] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:52.957 [2024-07-15 17:32:47.981928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:47.981948] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.957 [2024-07-15 17:32:47.981958] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.957 [2024-07-15 17:32:47.981976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.957 [2024-07-15 17:32:47.985893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:52.957 [2024-07-15 17:32:47.985915] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:52.957 [2024-07-15 17:32:47.985928] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:52.957 [2024-07-15 17:32:47.985936] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:52.957 [2024-07-15 17:32:47.985944] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:52.957 [2024-07-15 17:32:47.985952] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:52.957 [2024-07-15 17:32:47.985960] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:52.957 [2024-07-15 17:32:47.985968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:47.985981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:47.985997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:52.957 [2024-07-15 17:32:47.993886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:52.957 [2024-07-15 17:32:47.993913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.957 [2024-07-15 17:32:47.993932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.957 [2024-07-15 17:32:47.993944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.957 [2024-07-15 17:32:47.993956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.957 [2024-07-15 17:32:47.993964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:47.993980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:47.993995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:52.957 [2024-07-15 17:32:48.001888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:52.957 [2024-07-15 17:32:48.001908] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:52.957 [2024-07-15 17:32:48.001917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:48.001928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:48.001939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:52.957 [2024-07-15 17:32:48.001953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.957 [2024-07-15 17:32:48.009885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:52.957 [2024-07-15 17:32:48.009956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.009972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.009986] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:52.958 [2024-07-15 17:32:48.009994] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:52.958 [2024-07-15 17:32:48.010004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.017889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.017913] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:52.958 [2024-07-15 17:32:48.017930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.017945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.017958] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.958 [2024-07-15 17:32:48.017966] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.958 [2024-07-15 17:32:48.017976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.025885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.025916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.025933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.025947] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.958 [2024-07-15 17:32:48.025955] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.958 [2024-07-15 17:32:48.025965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.033903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.033933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.033947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.033962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.033974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.033982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.033991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.034000] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:52.958 [2024-07-15 17:32:48.034007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:52.958 [2024-07-15 17:32:48.034016] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:52.958 [2024-07-15 17:32:48.034041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.041884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.041913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.049888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.049913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.057888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.057928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.065888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.065921] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:52.958 [2024-07-15 17:32:48.065932] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:52.958 [2024-07-15 17:32:48.065942] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:09:52.958 [2024-07-15 17:32:48.065948] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:52.958 [2024-07-15 17:32:48.065958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:52.958 [2024-07-15 17:32:48.065970] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:52.958 [2024-07-15 17:32:48.065979] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:52.958 [2024-07-15 17:32:48.065988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.065999] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:52.958 [2024-07-15 17:32:48.066007] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.958 [2024-07-15 17:32:48.066016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.066028] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:52.958 [2024-07-15 17:32:48.066035] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:52.958 [2024-07-15 17:32:48.066044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:52.958 [2024-07-15 17:32:48.073890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.073917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.073935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:52.958 [2024-07-15 17:32:48.073948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:52.958 ===================================================== 00:09:52.958 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:52.958 ===================================================== 00:09:52.958 Controller Capabilities/Features 00:09:52.958 ================================ 00:09:52.958 Vendor ID: 4e58 00:09:52.958 Subsystem Vendor ID: 4e58 00:09:52.958 Serial Number: SPDK2 00:09:52.958 Model Number: SPDK bdev Controller 00:09:52.958 Firmware Version: 24.09 00:09:52.958 Recommended Arb Burst: 6 00:09:52.958 IEEE OUI Identifier: 8d 6b 50 00:09:52.958 Multi-path I/O 00:09:52.958 May have multiple subsystem ports: Yes 00:09:52.958 May have multiple controllers: Yes 00:09:52.958 Associated with SR-IOV VF: No 00:09:52.958 Max Data Transfer Size: 131072 00:09:52.958 Max Number of Namespaces: 32 00:09:52.958 Max Number of I/O Queues: 127 00:09:52.958 NVMe Specification Version (VS): 1.3 00:09:52.958 NVMe Specification Version (Identify): 1.3 00:09:52.958 Maximum Queue Entries: 256 00:09:52.958 Contiguous Queues Required: Yes 00:09:52.958 Arbitration Mechanisms 
Supported 00:09:52.958 Weighted Round Robin: Not Supported 00:09:52.958 Vendor Specific: Not Supported 00:09:52.958 Reset Timeout: 15000 ms 00:09:52.958 Doorbell Stride: 4 bytes 00:09:52.958 NVM Subsystem Reset: Not Supported 00:09:52.958 Command Sets Supported 00:09:52.958 NVM Command Set: Supported 00:09:52.958 Boot Partition: Not Supported 00:09:52.958 Memory Page Size Minimum: 4096 bytes 00:09:52.958 Memory Page Size Maximum: 4096 bytes 00:09:52.958 Persistent Memory Region: Not Supported 00:09:52.958 Optional Asynchronous Events Supported 00:09:52.958 Namespace Attribute Notices: Supported 00:09:52.959 Firmware Activation Notices: Not Supported 00:09:52.959 ANA Change Notices: Not Supported 00:09:52.959 PLE Aggregate Log Change Notices: Not Supported 00:09:52.959 LBA Status Info Alert Notices: Not Supported 00:09:52.959 EGE Aggregate Log Change Notices: Not Supported 00:09:52.959 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.959 Zone Descriptor Change Notices: Not Supported 00:09:52.959 Discovery Log Change Notices: Not Supported 00:09:52.959 Controller Attributes 00:09:52.959 128-bit Host Identifier: Supported 00:09:52.959 Non-Operational Permissive Mode: Not Supported 00:09:52.959 NVM Sets: Not Supported 00:09:52.959 Read Recovery Levels: Not Supported 00:09:52.959 Endurance Groups: Not Supported 00:09:52.959 Predictable Latency Mode: Not Supported 00:09:52.959 Traffic Based Keep ALive: Not Supported 00:09:52.959 Namespace Granularity: Not Supported 00:09:52.959 SQ Associations: Not Supported 00:09:52.959 UUID List: Not Supported 00:09:52.959 Multi-Domain Subsystem: Not Supported 00:09:52.959 Fixed Capacity Management: Not Supported 00:09:52.959 Variable Capacity Management: Not Supported 00:09:52.959 Delete Endurance Group: Not Supported 00:09:52.959 Delete NVM Set: Not Supported 00:09:52.959 Extended LBA Formats Supported: Not Supported 00:09:52.959 Flexible Data Placement Supported: Not Supported 00:09:52.959 00:09:52.959 Controller Memory Buffer Support 00:09:52.959 ================================ 00:09:52.959 Supported: No 00:09:52.959 00:09:52.959 Persistent Memory Region Support 00:09:52.959 ================================ 00:09:52.959 Supported: No 00:09:52.959 00:09:52.959 Admin Command Set Attributes 00:09:52.959 ============================ 00:09:52.959 Security Send/Receive: Not Supported 00:09:52.959 Format NVM: Not Supported 00:09:52.959 Firmware Activate/Download: Not Supported 00:09:52.959 Namespace Management: Not Supported 00:09:52.959 Device Self-Test: Not Supported 00:09:52.959 Directives: Not Supported 00:09:52.959 NVMe-MI: Not Supported 00:09:52.959 Virtualization Management: Not Supported 00:09:52.959 Doorbell Buffer Config: Not Supported 00:09:52.959 Get LBA Status Capability: Not Supported 00:09:52.959 Command & Feature Lockdown Capability: Not Supported 00:09:52.959 Abort Command Limit: 4 00:09:52.959 Async Event Request Limit: 4 00:09:52.959 Number of Firmware Slots: N/A 00:09:52.959 Firmware Slot 1 Read-Only: N/A 00:09:52.959 Firmware Activation Without Reset: N/A 00:09:52.959 Multiple Update Detection Support: N/A 00:09:52.959 Firmware Update Granularity: No Information Provided 00:09:52.959 Per-Namespace SMART Log: No 00:09:52.959 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.959 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:52.959 Command Effects Log Page: Supported 00:09:52.959 Get Log Page Extended Data: Supported 00:09:52.959 Telemetry Log Pages: Not Supported 00:09:52.959 Persistent Event Log Pages: Not Supported 
00:09:52.959 Supported Log Pages Log Page: May Support 00:09:52.959 Commands Supported & Effects Log Page: Not Supported 00:09:52.959 Feature Identifiers & Effects Log Page:May Support 00:09:52.959 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.959 Data Area 4 for Telemetry Log: Not Supported 00:09:52.959 Error Log Page Entries Supported: 128 00:09:52.959 Keep Alive: Supported 00:09:52.959 Keep Alive Granularity: 10000 ms 00:09:52.959 00:09:52.959 NVM Command Set Attributes 00:09:52.959 ========================== 00:09:52.959 Submission Queue Entry Size 00:09:52.959 Max: 64 00:09:52.959 Min: 64 00:09:52.959 Completion Queue Entry Size 00:09:52.959 Max: 16 00:09:52.959 Min: 16 00:09:52.959 Number of Namespaces: 32 00:09:52.959 Compare Command: Supported 00:09:52.959 Write Uncorrectable Command: Not Supported 00:09:52.959 Dataset Management Command: Supported 00:09:52.959 Write Zeroes Command: Supported 00:09:52.959 Set Features Save Field: Not Supported 00:09:52.959 Reservations: Not Supported 00:09:52.959 Timestamp: Not Supported 00:09:52.959 Copy: Supported 00:09:52.959 Volatile Write Cache: Present 00:09:52.959 Atomic Write Unit (Normal): 1 00:09:52.959 Atomic Write Unit (PFail): 1 00:09:52.959 Atomic Compare & Write Unit: 1 00:09:52.959 Fused Compare & Write: Supported 00:09:52.959 Scatter-Gather List 00:09:52.959 SGL Command Set: Supported (Dword aligned) 00:09:52.959 SGL Keyed: Not Supported 00:09:52.959 SGL Bit Bucket Descriptor: Not Supported 00:09:52.959 SGL Metadata Pointer: Not Supported 00:09:52.959 Oversized SGL: Not Supported 00:09:52.959 SGL Metadata Address: Not Supported 00:09:52.959 SGL Offset: Not Supported 00:09:52.959 Transport SGL Data Block: Not Supported 00:09:52.959 Replay Protected Memory Block: Not Supported 00:09:52.959 00:09:52.959 Firmware Slot Information 00:09:52.959 ========================= 00:09:52.959 Active slot: 1 00:09:52.959 Slot 1 Firmware Revision: 24.09 00:09:52.959 00:09:52.959 00:09:52.959 Commands Supported and Effects 00:09:52.959 ============================== 00:09:52.959 Admin Commands 00:09:52.959 -------------- 00:09:52.959 Get Log Page (02h): Supported 00:09:52.959 Identify (06h): Supported 00:09:52.959 Abort (08h): Supported 00:09:52.959 Set Features (09h): Supported 00:09:52.959 Get Features (0Ah): Supported 00:09:52.959 Asynchronous Event Request (0Ch): Supported 00:09:52.959 Keep Alive (18h): Supported 00:09:52.959 I/O Commands 00:09:52.959 ------------ 00:09:52.959 Flush (00h): Supported LBA-Change 00:09:52.959 Write (01h): Supported LBA-Change 00:09:52.959 Read (02h): Supported 00:09:52.959 Compare (05h): Supported 00:09:52.959 Write Zeroes (08h): Supported LBA-Change 00:09:52.959 Dataset Management (09h): Supported LBA-Change 00:09:52.959 Copy (19h): Supported LBA-Change 00:09:52.959 00:09:52.959 Error Log 00:09:52.959 ========= 00:09:52.959 00:09:52.959 Arbitration 00:09:52.959 =========== 00:09:52.959 Arbitration Burst: 1 00:09:52.959 00:09:52.959 Power Management 00:09:52.959 ================ 00:09:52.959 Number of Power States: 1 00:09:52.959 Current Power State: Power State #0 00:09:52.959 Power State #0: 00:09:52.959 Max Power: 0.00 W 00:09:52.959 Non-Operational State: Operational 00:09:52.959 Entry Latency: Not Reported 00:09:52.959 Exit Latency: Not Reported 00:09:52.959 Relative Read Throughput: 0 00:09:52.959 Relative Read Latency: 0 00:09:52.959 Relative Write Throughput: 0 00:09:52.959 Relative Write Latency: 0 00:09:52.959 Idle Power: Not Reported 00:09:52.959 Active Power: Not Reported 00:09:52.959 
Non-Operational Permissive Mode: Not Supported 00:09:52.959 00:09:52.959 Health Information 00:09:52.959 ================== 00:09:52.959 Critical Warnings: 00:09:52.960 Available Spare Space: OK 00:09:52.960 Temperature: OK 00:09:52.960 Device Reliability: OK 00:09:52.960 Read Only: No 00:09:52.960 Volatile Memory Backup: OK 00:09:52.960 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:52.960 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:52.960 Available Spare: 0% 00:09:52.960 Available Sp[2024-07-15 17:32:48.074069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:52.960 [2024-07-15 17:32:48.081893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:52.960 [2024-07-15 17:32:48.081951] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:52.960 [2024-07-15 17:32:48.081970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.960 [2024-07-15 17:32:48.081981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.960 [2024-07-15 17:32:48.081992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.960 [2024-07-15 17:32:48.082002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.960 [2024-07-15 17:32:48.082105] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:52.960 [2024-07-15 17:32:48.082128] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:52.960 [2024-07-15 17:32:48.083102] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:52.960 [2024-07-15 17:32:48.083172] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:52.960 [2024-07-15 17:32:48.083187] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:52.960 [2024-07-15 17:32:48.084127] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:52.960 [2024-07-15 17:32:48.084153] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:52.960 [2024-07-15 17:32:48.084208] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:52.960 [2024-07-15 17:32:48.086888] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:53.221 are Threshold: 0% 00:09:53.221 Life Percentage Used: 0% 00:09:53.221 Data Units Read: 0 00:09:53.221 Data Units Written: 0 00:09:53.221 Host Read Commands: 0 00:09:53.221 Host Write Commands: 0 00:09:53.221 Controller Busy Time: 0 minutes 00:09:53.221 Power Cycles: 0 00:09:53.221 Power On Hours: 0 hours 00:09:53.221 Unsafe Shutdowns: 0 00:09:53.221 Unrecoverable Media 
Errors: 0 00:09:53.221 Lifetime Error Log Entries: 0 00:09:53.221 Warning Temperature Time: 0 minutes 00:09:53.221 Critical Temperature Time: 0 minutes 00:09:53.221 00:09:53.221 Number of Queues 00:09:53.221 ================ 00:09:53.221 Number of I/O Submission Queues: 127 00:09:53.221 Number of I/O Completion Queues: 127 00:09:53.221 00:09:53.221 Active Namespaces 00:09:53.221 ================= 00:09:53.221 Namespace ID:1 00:09:53.221 Error Recovery Timeout: Unlimited 00:09:53.221 Command Set Identifier: NVM (00h) 00:09:53.221 Deallocate: Supported 00:09:53.221 Deallocated/Unwritten Error: Not Supported 00:09:53.221 Deallocated Read Value: Unknown 00:09:53.221 Deallocate in Write Zeroes: Not Supported 00:09:53.221 Deallocated Guard Field: 0xFFFF 00:09:53.221 Flush: Supported 00:09:53.221 Reservation: Supported 00:09:53.221 Namespace Sharing Capabilities: Multiple Controllers 00:09:53.221 Size (in LBAs): 131072 (0GiB) 00:09:53.221 Capacity (in LBAs): 131072 (0GiB) 00:09:53.221 Utilization (in LBAs): 131072 (0GiB) 00:09:53.221 NGUID: F5126E9E58D84AA8A3045C8F0B38953E 00:09:53.221 UUID: f5126e9e-58d8-4aa8-a304-5c8f0b38953e 00:09:53.221 Thin Provisioning: Not Supported 00:09:53.221 Per-NS Atomic Units: Yes 00:09:53.221 Atomic Boundary Size (Normal): 0 00:09:53.221 Atomic Boundary Size (PFail): 0 00:09:53.221 Atomic Boundary Offset: 0 00:09:53.221 Maximum Single Source Range Length: 65535 00:09:53.221 Maximum Copy Length: 65535 00:09:53.221 Maximum Source Range Count: 1 00:09:53.221 NGUID/EUI64 Never Reused: No 00:09:53.221 Namespace Write Protected: No 00:09:53.221 Number of LBA Formats: 1 00:09:53.221 Current LBA Format: LBA Format #00 00:09:53.221 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.221 00:09:53.221 17:32:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:53.221 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.221 [2024-07-15 17:32:48.314646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:58.492 Initializing NVMe Controllers 00:09:58.492 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:58.492 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:58.492 Initialization complete. Launching workers. 
00:09:58.492 ======================================================== 00:09:58.492 Latency(us) 00:09:58.492 Device Information : IOPS MiB/s Average min max 00:09:58.492 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35067.45 136.98 3649.61 1149.19 9560.18 00:09:58.492 ======================================================== 00:09:58.492 Total : 35067.45 136.98 3649.61 1149.19 9560.18 00:09:58.492 00:09:58.492 [2024-07-15 17:32:53.421283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:58.492 17:32:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:58.492 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.751 [2024-07-15 17:32:53.665895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:04.021 Initializing NVMe Controllers 00:10:04.021 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:04.021 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:04.021 Initialization complete. Launching workers. 00:10:04.021 ======================================================== 00:10:04.021 Latency(us) 00:10:04.021 Device Information : IOPS MiB/s Average min max 00:10:04.021 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32283.28 126.11 3964.13 1200.72 8162.50 00:10:04.021 ======================================================== 00:10:04.021 Total : 32283.28 126.11 3964.13 1200.72 8162.50 00:10:04.021 00:10:04.021 [2024-07-15 17:32:58.688847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:04.021 17:32:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:04.021 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.021 [2024-07-15 17:32:58.901829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:09.285 [2024-07-15 17:33:04.043006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:09.285 Initializing NVMe Controllers 00:10:09.285 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:09.285 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:09.285 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:09.285 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:09.285 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:09.285 Initialization complete. Launching workers. 
00:10:09.285 Starting thread on core 2 00:10:09.285 Starting thread on core 3 00:10:09.285 Starting thread on core 1 00:10:09.285 17:33:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:09.285 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.285 [2024-07-15 17:33:04.345331] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:12.581 [2024-07-15 17:33:07.427362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:12.581 Initializing NVMe Controllers 00:10:12.581 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.581 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:12.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:12.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:12.581 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:12.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:12.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:12.581 Initialization complete. Launching workers. 00:10:12.581 Starting thread on core 1 with urgent priority queue 00:10:12.581 Starting thread on core 2 with urgent priority queue 00:10:12.581 Starting thread on core 3 with urgent priority queue 00:10:12.581 Starting thread on core 0 with urgent priority queue 00:10:12.581 SPDK bdev Controller (SPDK2 ) core 0: 3925.67 IO/s 25.47 secs/100000 ios 00:10:12.581 SPDK bdev Controller (SPDK2 ) core 1: 4418.33 IO/s 22.63 secs/100000 ios 00:10:12.581 SPDK bdev Controller (SPDK2 ) core 2: 4076.00 IO/s 24.53 secs/100000 ios 00:10:12.581 SPDK bdev Controller (SPDK2 ) core 3: 4301.33 IO/s 23.25 secs/100000 ios 00:10:12.581 ======================================================== 00:10:12.581 00:10:12.581 17:33:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:12.581 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.839 [2024-07-15 17:33:07.725303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:12.839 Initializing NVMe Controllers 00:10:12.839 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.839 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:12.839 Namespace ID: 1 size: 0GB 00:10:12.839 Initialization complete. 00:10:12.839 INFO: using host memory buffer for IO 00:10:12.839 Hello world! 
00:10:12.839 [2024-07-15 17:33:07.735527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:12.839 17:33:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:12.839 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.097 [2024-07-15 17:33:08.025270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:14.034 Initializing NVMe Controllers 00:10:14.034 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:14.034 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:14.034 Initialization complete. Launching workers. 00:10:14.034 submit (in ns) avg, min, max = 7612.8, 3505.6, 4006173.3 00:10:14.034 complete (in ns) avg, min, max = 26545.3, 2058.9, 5996254.4 00:10:14.034 00:10:14.034 Submit histogram 00:10:14.034 ================ 00:10:14.035 Range in us Cumulative Count 00:10:14.035 3.484 - 3.508: 0.0150% ( 2) 00:10:14.035 3.508 - 3.532: 0.4288% ( 55) 00:10:14.035 3.532 - 3.556: 1.6625% ( 164) 00:10:14.035 3.556 - 3.579: 4.5513% ( 384) 00:10:14.035 3.579 - 3.603: 9.6517% ( 678) 00:10:14.035 3.603 - 3.627: 18.5361% ( 1181) 00:10:14.035 3.627 - 3.650: 27.6386% ( 1210) 00:10:14.035 3.650 - 3.674: 35.3720% ( 1028) 00:10:14.035 3.674 - 3.698: 41.5106% ( 816) 00:10:14.035 3.698 - 3.721: 47.5288% ( 800) 00:10:14.035 3.721 - 3.745: 52.1703% ( 617) 00:10:14.035 3.745 - 3.769: 56.2552% ( 543) 00:10:14.035 3.769 - 3.793: 59.8360% ( 476) 00:10:14.035 3.793 - 3.816: 63.0332% ( 425) 00:10:14.035 3.816 - 3.840: 66.3357% ( 439) 00:10:14.035 3.840 - 3.864: 70.0293% ( 491) 00:10:14.035 3.864 - 3.887: 74.0916% ( 540) 00:10:14.035 3.887 - 3.911: 77.7402% ( 485) 00:10:14.035 3.911 - 3.935: 80.7794% ( 404) 00:10:14.035 3.935 - 3.959: 82.9760% ( 292) 00:10:14.035 3.959 - 3.982: 85.1275% ( 286) 00:10:14.035 3.982 - 4.006: 86.8728% ( 232) 00:10:14.035 4.006 - 4.030: 88.1592% ( 171) 00:10:14.035 4.030 - 4.053: 89.2274% ( 142) 00:10:14.035 4.053 - 4.077: 90.4010% ( 156) 00:10:14.035 4.077 - 4.101: 91.3639% ( 128) 00:10:14.035 4.101 - 4.124: 92.1237% ( 101) 00:10:14.035 4.124 - 4.148: 92.8609% ( 98) 00:10:14.035 4.148 - 4.172: 93.2972% ( 58) 00:10:14.035 4.172 - 4.196: 93.6282% ( 44) 00:10:14.035 4.196 - 4.219: 93.9592% ( 44) 00:10:14.035 4.219 - 4.243: 94.2000% ( 32) 00:10:14.035 4.243 - 4.267: 94.4031% ( 27) 00:10:14.035 4.267 - 4.290: 94.6363% ( 31) 00:10:14.035 4.290 - 4.314: 94.7792% ( 19) 00:10:14.035 4.314 - 4.338: 94.9297% ( 20) 00:10:14.035 4.338 - 4.361: 95.0801% ( 20) 00:10:14.035 4.361 - 4.385: 95.2005% ( 16) 00:10:14.035 4.385 - 4.409: 95.2908% ( 12) 00:10:14.035 4.409 - 4.433: 95.3810% ( 12) 00:10:14.035 4.433 - 4.456: 95.4713% ( 12) 00:10:14.035 4.456 - 4.480: 95.5917% ( 16) 00:10:14.035 4.480 - 4.504: 95.6669% ( 10) 00:10:14.035 4.504 - 4.527: 95.7421% ( 10) 00:10:14.035 4.527 - 4.551: 95.7948% ( 7) 00:10:14.035 4.551 - 4.575: 95.8399% ( 6) 00:10:14.035 4.575 - 4.599: 95.8926% ( 7) 00:10:14.035 4.599 - 4.622: 95.9302% ( 5) 00:10:14.035 4.622 - 4.646: 95.9678% ( 5) 00:10:14.035 4.646 - 4.670: 96.0355% ( 9) 00:10:14.035 4.670 - 4.693: 96.0656% ( 4) 00:10:14.035 4.693 - 4.717: 96.0957% ( 4) 00:10:14.035 4.717 - 4.741: 96.1860% ( 12) 00:10:14.035 4.741 - 4.764: 96.2161% ( 4) 00:10:14.035 4.764 - 4.788: 96.2537% ( 5) 00:10:14.035 4.788 - 4.812: 96.3214% ( 9) 
00:10:14.035 4.812 - 4.836: 96.3740% ( 7) 00:10:14.035 4.836 - 4.859: 96.4568% ( 11) 00:10:14.035 4.859 - 4.883: 96.5471% ( 12) 00:10:14.035 4.883 - 4.907: 96.6223% ( 10) 00:10:14.035 4.907 - 4.930: 96.6674% ( 6) 00:10:14.035 4.930 - 4.954: 96.6975% ( 4) 00:10:14.035 4.954 - 4.978: 96.7953% ( 13) 00:10:14.035 4.978 - 5.001: 96.8329% ( 5) 00:10:14.035 5.001 - 5.025: 96.8931% ( 8) 00:10:14.035 5.025 - 5.049: 96.9608% ( 9) 00:10:14.035 5.049 - 5.073: 97.0135% ( 7) 00:10:14.035 5.073 - 5.096: 97.0586% ( 6) 00:10:14.035 5.096 - 5.120: 97.1263% ( 9) 00:10:14.035 5.120 - 5.144: 97.1639% ( 5) 00:10:14.035 5.144 - 5.167: 97.2015% ( 5) 00:10:14.035 5.167 - 5.191: 97.2316% ( 4) 00:10:14.035 5.191 - 5.215: 97.2617% ( 4) 00:10:14.035 5.215 - 5.239: 97.2843% ( 3) 00:10:14.035 5.239 - 5.262: 97.3069% ( 3) 00:10:14.035 5.262 - 5.286: 97.3520% ( 6) 00:10:14.035 5.286 - 5.310: 97.3595% ( 1) 00:10:14.035 5.310 - 5.333: 97.3821% ( 3) 00:10:14.035 5.333 - 5.357: 97.4046% ( 3) 00:10:14.035 5.357 - 5.381: 97.4347% ( 4) 00:10:14.035 5.381 - 5.404: 97.4573% ( 3) 00:10:14.035 5.404 - 5.428: 97.4648% ( 1) 00:10:14.035 5.428 - 5.452: 97.4949% ( 4) 00:10:14.035 5.452 - 5.476: 97.5024% ( 1) 00:10:14.035 5.476 - 5.499: 97.5175% ( 2) 00:10:14.035 5.499 - 5.523: 97.5551% ( 5) 00:10:14.035 5.523 - 5.547: 97.5852% ( 4) 00:10:14.035 5.547 - 5.570: 97.6078% ( 3) 00:10:14.035 5.570 - 5.594: 97.6153% ( 1) 00:10:14.035 5.594 - 5.618: 97.6379% ( 3) 00:10:14.035 5.618 - 5.641: 97.6679% ( 4) 00:10:14.035 5.641 - 5.665: 97.6905% ( 3) 00:10:14.035 5.665 - 5.689: 97.7056% ( 2) 00:10:14.035 5.736 - 5.760: 97.7131% ( 1) 00:10:14.035 5.807 - 5.831: 97.7281% ( 2) 00:10:14.035 5.855 - 5.879: 97.7357% ( 1) 00:10:14.035 5.879 - 5.902: 97.7582% ( 3) 00:10:14.035 5.902 - 5.926: 97.7808% ( 3) 00:10:14.035 5.926 - 5.950: 97.7883% ( 1) 00:10:14.035 5.950 - 5.973: 97.7958% ( 1) 00:10:14.035 5.973 - 5.997: 97.8259% ( 4) 00:10:14.035 6.068 - 6.116: 97.8485% ( 3) 00:10:14.035 6.116 - 6.163: 97.8560% ( 1) 00:10:14.035 6.163 - 6.210: 97.8635% ( 1) 00:10:14.035 6.258 - 6.305: 97.8711% ( 1) 00:10:14.035 6.305 - 6.353: 97.8936% ( 3) 00:10:14.035 6.353 - 6.400: 97.9162% ( 3) 00:10:14.035 6.447 - 6.495: 97.9312% ( 2) 00:10:14.035 6.495 - 6.542: 97.9388% ( 1) 00:10:14.035 6.542 - 6.590: 97.9613% ( 3) 00:10:14.035 6.590 - 6.637: 97.9839% ( 3) 00:10:14.035 6.637 - 6.684: 97.9989% ( 2) 00:10:14.035 6.684 - 6.732: 98.0065% ( 1) 00:10:14.035 6.779 - 6.827: 98.0140% ( 1) 00:10:14.035 6.874 - 6.921: 98.0215% ( 1) 00:10:14.035 6.921 - 6.969: 98.0366% ( 2) 00:10:14.035 6.969 - 7.016: 98.0441% ( 1) 00:10:14.036 7.111 - 7.159: 98.0667% ( 3) 00:10:14.036 7.159 - 7.206: 98.0817% ( 2) 00:10:14.036 7.206 - 7.253: 98.0892% ( 1) 00:10:14.036 7.301 - 7.348: 98.1118% ( 3) 00:10:14.036 7.443 - 7.490: 98.1419% ( 4) 00:10:14.036 7.538 - 7.585: 98.1720% ( 4) 00:10:14.036 7.585 - 7.633: 98.1795% ( 1) 00:10:14.036 7.633 - 7.680: 98.2021% ( 3) 00:10:14.036 7.680 - 7.727: 98.2096% ( 1) 00:10:14.036 7.727 - 7.775: 98.2246% ( 2) 00:10:14.036 7.775 - 7.822: 98.2322% ( 1) 00:10:14.036 7.822 - 7.870: 98.2397% ( 1) 00:10:14.036 7.870 - 7.917: 98.2472% ( 1) 00:10:14.036 7.917 - 7.964: 98.2698% ( 3) 00:10:14.036 8.059 - 8.107: 98.2773% ( 1) 00:10:14.036 8.107 - 8.154: 98.2923% ( 2) 00:10:14.036 8.154 - 8.201: 98.3074% ( 2) 00:10:14.036 8.201 - 8.249: 98.3224% ( 2) 00:10:14.036 8.296 - 8.344: 98.3299% ( 1) 00:10:14.036 8.391 - 8.439: 98.3375% ( 1) 00:10:14.036 8.439 - 8.486: 98.3525% ( 2) 00:10:14.036 8.486 - 8.533: 98.3676% ( 2) 00:10:14.036 8.581 - 8.628: 98.3826% ( 2) 00:10:14.036 8.770 - 
8.818: 98.3901% ( 1) 00:10:14.036 8.818 - 8.865: 98.3977% ( 1) 00:10:14.036 8.865 - 8.913: 98.4202% ( 3) 00:10:14.036 8.913 - 8.960: 98.4277% ( 1) 00:10:14.036 8.960 - 9.007: 98.4503% ( 3) 00:10:14.036 9.055 - 9.102: 98.4578% ( 1) 00:10:14.036 9.102 - 9.150: 98.4654% ( 1) 00:10:14.036 9.244 - 9.292: 98.4729% ( 1) 00:10:14.036 9.292 - 9.339: 98.4879% ( 2) 00:10:14.036 9.339 - 9.387: 98.5030% ( 2) 00:10:14.036 9.387 - 9.434: 98.5105% ( 1) 00:10:14.036 9.529 - 9.576: 98.5180% ( 1) 00:10:14.036 9.576 - 9.624: 98.5255% ( 1) 00:10:14.036 9.813 - 9.861: 98.5331% ( 1) 00:10:14.036 9.861 - 9.908: 98.5406% ( 1) 00:10:14.036 9.956 - 10.003: 98.5481% ( 1) 00:10:14.036 10.003 - 10.050: 98.5556% ( 1) 00:10:14.036 10.050 - 10.098: 98.5632% ( 1) 00:10:14.036 10.098 - 10.145: 98.5782% ( 2) 00:10:14.036 10.335 - 10.382: 98.5857% ( 1) 00:10:14.036 10.382 - 10.430: 98.5932% ( 1) 00:10:14.036 10.430 - 10.477: 98.6008% ( 1) 00:10:14.036 10.524 - 10.572: 98.6083% ( 1) 00:10:14.036 10.619 - 10.667: 98.6158% ( 1) 00:10:14.036 10.667 - 10.714: 98.6384% ( 3) 00:10:14.036 10.809 - 10.856: 98.6534% ( 2) 00:10:14.036 10.856 - 10.904: 98.6685% ( 2) 00:10:14.036 10.951 - 10.999: 98.6835% ( 2) 00:10:14.036 11.046 - 11.093: 98.6910% ( 1) 00:10:14.036 11.141 - 11.188: 98.6986% ( 1) 00:10:14.036 11.283 - 11.330: 98.7136% ( 2) 00:10:14.036 11.330 - 11.378: 98.7211% ( 1) 00:10:14.036 11.378 - 11.425: 98.7287% ( 1) 00:10:14.036 11.520 - 11.567: 98.7362% ( 1) 00:10:14.036 11.662 - 11.710: 98.7437% ( 1) 00:10:14.036 11.852 - 11.899: 98.7512% ( 1) 00:10:14.036 11.994 - 12.041: 98.7587% ( 1) 00:10:14.036 12.041 - 12.089: 98.7663% ( 1) 00:10:14.036 12.136 - 12.231: 98.7738% ( 1) 00:10:14.036 12.231 - 12.326: 98.7813% ( 1) 00:10:14.036 12.326 - 12.421: 98.7888% ( 1) 00:10:14.036 12.610 - 12.705: 98.7964% ( 1) 00:10:14.036 12.705 - 12.800: 98.8039% ( 1) 00:10:14.036 12.800 - 12.895: 98.8189% ( 2) 00:10:14.036 12.990 - 13.084: 98.8265% ( 1) 00:10:14.036 13.084 - 13.179: 98.8340% ( 1) 00:10:14.036 13.179 - 13.274: 98.8415% ( 1) 00:10:14.036 13.274 - 13.369: 98.8490% ( 1) 00:10:14.036 13.369 - 13.464: 98.8641% ( 2) 00:10:14.036 13.464 - 13.559: 98.8716% ( 1) 00:10:14.036 13.653 - 13.748: 98.8791% ( 1) 00:10:14.036 13.748 - 13.843: 98.8866% ( 1) 00:10:14.036 13.938 - 14.033: 98.9017% ( 2) 00:10:14.036 14.033 - 14.127: 98.9092% ( 1) 00:10:14.036 14.222 - 14.317: 98.9167% ( 1) 00:10:14.036 14.412 - 14.507: 98.9242% ( 1) 00:10:14.036 14.507 - 14.601: 98.9318% ( 1) 00:10:14.036 14.601 - 14.696: 98.9468% ( 2) 00:10:14.036 14.696 - 14.791: 98.9619% ( 2) 00:10:14.036 14.791 - 14.886: 98.9769% ( 2) 00:10:14.036 16.024 - 16.119: 98.9844% ( 1) 00:10:14.036 17.067 - 17.161: 98.9920% ( 1) 00:10:14.036 17.161 - 17.256: 99.0070% ( 2) 00:10:14.036 17.351 - 17.446: 99.0145% ( 1) 00:10:14.036 17.446 - 17.541: 99.0371% ( 3) 00:10:14.036 17.541 - 17.636: 99.0597% ( 3) 00:10:14.036 17.636 - 17.730: 99.0897% ( 4) 00:10:14.036 17.730 - 17.825: 99.1424% ( 7) 00:10:14.036 17.825 - 17.920: 99.1725% ( 4) 00:10:14.036 17.920 - 18.015: 99.2176% ( 6) 00:10:14.036 18.015 - 18.110: 99.2703% ( 7) 00:10:14.036 18.110 - 18.204: 99.3530% ( 11) 00:10:14.036 18.204 - 18.299: 99.4358% ( 11) 00:10:14.036 18.299 - 18.394: 99.5035% ( 9) 00:10:14.036 18.394 - 18.489: 99.5562% ( 7) 00:10:14.036 18.489 - 18.584: 99.6163% ( 8) 00:10:14.036 18.584 - 18.679: 99.6464% ( 4) 00:10:14.036 18.679 - 18.773: 99.6991% ( 7) 00:10:14.036 18.773 - 18.868: 99.7367% ( 5) 00:10:14.036 18.868 - 18.963: 99.7593% ( 3) 00:10:14.036 18.963 - 19.058: 99.7668% ( 1) 00:10:14.036 19.058 - 19.153: 99.7818% ( 
2) 00:10:14.036 19.153 - 19.247: 99.7894% ( 1) 00:10:14.036 19.247 - 19.342: 99.7969% ( 1) 00:10:14.036 19.342 - 19.437: 99.8044% ( 1) 00:10:14.036 19.437 - 19.532: 99.8195% ( 2) 00:10:14.036 19.721 - 19.816: 99.8270% ( 1) 00:10:14.036 20.101 - 20.196: 99.8345% ( 1) 00:10:14.036 20.196 - 20.290: 99.8495% ( 2) 00:10:14.036 20.385 - 20.480: 99.8571% ( 1) 00:10:14.036 21.997 - 22.092: 99.8646% ( 1) 00:10:14.036 22.945 - 23.040: 99.8721% ( 1) 00:10:14.036 24.273 - 24.462: 99.8796% ( 1) 00:10:14.036 25.031 - 25.221: 99.8872% ( 1) 00:10:14.036 28.444 - 28.634: 99.8947% ( 1) 00:10:14.036 29.013 - 29.203: 99.9022% ( 1) 00:10:14.036 31.479 - 31.668: 99.9097% ( 1) 00:10:14.036 3980.705 - 4004.978: 99.9774% ( 9) 00:10:14.036 4004.978 - 4029.250: 100.0000% ( 3) 00:10:14.036 00:10:14.036 Complete histogram 00:10:14.036 ================== 00:10:14.036 Range in us Cumulative Count 00:10:14.036 2.050 - 2.062: 0.1730% ( 23) 00:10:14.036 2.062 - 2.074: 21.7107% ( 2863) 00:10:14.036 2.074 - 2.086: 39.0431% ( 2304) 00:10:14.036 2.086 - 2.098: 42.0673% ( 402) 00:10:14.036 2.098 - 2.110: 50.9892% ( 1186) 00:10:14.036 2.110 - 2.121: 55.1268% ( 550) 00:10:14.036 2.121 - 2.133: 57.4212% ( 305) 00:10:14.036 2.133 - 2.145: 67.4641% ( 1335) 00:10:14.036 2.145 - 2.157: 71.4737% ( 533) 00:10:14.037 2.157 - 2.169: 73.2265% ( 233) 00:10:14.037 2.169 - 2.181: 76.7020% ( 462) 00:10:14.037 2.181 - 2.193: 78.0937% ( 185) 00:10:14.037 2.193 - 2.204: 79.0792% ( 131) 00:10:14.037 2.204 - 2.216: 82.8331% ( 499) 00:10:14.037 2.216 - 2.228: 85.2855% ( 326) 00:10:14.037 2.228 - 2.240: 86.7449% ( 194) 00:10:14.037 2.240 - 2.252: 88.2946% ( 206) 00:10:14.037 2.252 - 2.264: 88.9942% ( 93) 00:10:14.037 2.264 - 2.276: 89.3177% ( 43) 00:10:14.037 2.276 - 2.287: 89.8894% ( 76) 00:10:14.037 2.287 - 2.299: 90.8523% ( 128) 00:10:14.037 2.299 - 2.311: 91.7400% ( 118) 00:10:14.037 2.311 - 2.323: 92.3569% ( 82) 00:10:14.037 2.323 - 2.335: 92.7180% ( 48) 00:10:14.037 2.335 - 2.347: 93.0038% ( 38) 00:10:14.037 2.347 - 2.359: 93.3198% ( 42) 00:10:14.037 2.359 - 2.370: 93.7636% ( 59) 00:10:14.037 2.370 - 2.382: 94.2150% ( 60) 00:10:14.037 2.382 - 2.394: 94.6889% ( 63) 00:10:14.037 2.394 - 2.406: 94.9898% ( 40) 00:10:14.037 2.406 - 2.418: 95.1930% ( 27) 00:10:14.037 2.418 - 2.430: 95.3961% ( 27) 00:10:14.037 2.430 - 2.441: 95.5917% ( 26) 00:10:14.037 2.441 - 2.453: 95.8625% ( 36) 00:10:14.037 2.453 - 2.465: 95.9904% ( 17) 00:10:14.037 2.465 - 2.477: 96.2236% ( 31) 00:10:14.037 2.477 - 2.489: 96.3590% ( 18) 00:10:14.037 2.489 - 2.501: 96.5019% ( 19) 00:10:14.037 2.501 - 2.513: 96.6449% ( 19) 00:10:14.037 2.513 - 2.524: 96.7351% ( 12) 00:10:14.037 2.524 - 2.536: 96.8179% ( 11) 00:10:14.037 2.536 - 2.548: 96.8931% ( 10) 00:10:14.037 2.548 - 2.560: 96.9307% ( 5) 00:10:14.037 2.560 - 2.572: 96.9984% ( 9) 00:10:14.037 2.572 - 2.584: 97.0586% ( 8) 00:10:14.037 2.584 - 2.596: 97.0812% ( 3) 00:10:14.037 2.596 - 2.607: 97.1263% ( 6) 00:10:14.037 2.607 - 2.619: 97.1865% ( 8) 00:10:14.037 2.619 - 2.631: 97.2467% ( 8) 00:10:14.037 2.631 - 2.643: 97.2993% ( 7) 00:10:14.037 2.643 - 2.655: 97.3595% ( 8) 00:10:14.037 2.655 - 2.667: 97.3896% ( 4) 00:10:14.037 2.667 - 2.679: 97.4573% ( 9) 00:10:14.037 2.679 - 2.690: 97.5100% ( 7) 00:10:14.037 2.690 - 2.702: 97.5250% ( 2) 00:10:14.037 2.702 - 2.714: 97.5777% ( 7) 00:10:14.037 2.714 - 2.726: 97.6078% ( 4) 00:10:14.037 2.726 - 2.738: 97.6454% ( 5) 00:10:14.037 2.738 - 2.750: 97.6604% ( 2) 00:10:14.037 2.750 - 2.761: 97.7131% ( 7) 00:10:14.037 2.761 - 2.773: 97.7582% ( 6) 00:10:14.037 2.773 - 2.785: 97.7657% ( 1) 
00:10:14.037 2.785 - 2.797: 97.7883% ( 3) 00:10:14.037 2.797 - 2.809: 97.8034% ( 2) 00:10:14.037 2.809 - 2.821: 97.8485% ( 6) 00:10:14.037 2.821 - 2.833: 97.8635% ( 2) 00:10:14.037 2.833 - 2.844: 97.9087% ( 6) 00:10:14.037 2.844 - 2.856: 97.9237% ( 2) 00:10:14.037 2.856 - 2.868: 97.9463% ( 3) 00:10:14.037 2.868 - 2.880: 97.9613% ( 2) 00:10:14.037 2.880 - 2.892: 97.9839% ( 3) 00:10:14.037 2.892 - 2.904: 98.0065% ( 3) 00:10:14.037 2.904 - 2.916: 98.0140% ( 1) 00:10:14.037 2.916 - 2.927: 98.0290% ( 2) 00:10:14.037 2.927 - 2.939: 98.0366% ( 1) 00:10:14.037 2.939 - 2.951: 98.0441% ( 1) 00:10:14.037 2.951 - 2.963: 98.0591% ( 2) 00:10:14.037 2.963 - 2.975: 98.0667% ( 1) 00:10:14.037 2.975 - 2.987: 98.0817% ( 2) 00:10:14.037 2.999 - 3.010: 98.0892% ( 1) 00:10:14.037 3.010 - 3.022: 98.1043% ( 2) 00:10:14.037 3.022 - 3.034: 98.1118% ( 1) 00:10:14.037 3.034 - 3.058: 98.1268% ( 2) 00:10:14.037 3.058 - 3.081: 98.1494% ( 3) 00:10:14.037 3.081 - 3.105: 98.1720% ( 3) 00:10:14.037 3.105 - 3.129: 98.1870% ( 2) 00:10:14.037 3.129 - 3.153: 98.2021% ( 2) 00:10:14.037 3.153 - 3.176: 98.2171% ( 2) 00:10:14.037 3.176 - 3.200: 98.2246% ( 1) 00:10:14.037 3.200 - 3.224: 98.2397% ( 2) 00:10:14.037 3.224 - 3.247: 98.2472% ( 1) 00:10:14.037 3.247 - 3.271: 98.2547% ( 1) 00:10:14.037 3.271 - 3.295: 98.2622% ( 1) 00:10:14.037 3.319 - 3.342: 98.2698% ( 1) 00:10:14.037 3.342 - 3.366: 98.2923% ( 3) 00:10:14.037 3.366 - 3.390: 98.3149% ( 3) 00:10:14.037 3.390 - 3.413: 98.3299% ( 2) 00:10:14.037 3.413 - 3.437: 98.3375% ( 1) 00:10:14.037 3.437 - 3.461: 98.3525% ( 2) 00:10:14.037 3.461 - 3.484: 98.3600%[2024-07-15 17:33:09.123824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:14.037 ( 1) 00:10:14.037 3.484 - 3.508: 98.3826% ( 3) 00:10:14.037 3.508 - 3.532: 98.4052% ( 3) 00:10:14.037 3.532 - 3.556: 98.4127% ( 1) 00:10:14.037 3.556 - 3.579: 98.4277% ( 2) 00:10:14.037 3.579 - 3.603: 98.4428% ( 2) 00:10:14.037 3.603 - 3.627: 98.4503% ( 1) 00:10:14.037 3.627 - 3.650: 98.4578% ( 1) 00:10:14.037 3.698 - 3.721: 98.4729% ( 2) 00:10:14.037 3.721 - 3.745: 98.4804% ( 1) 00:10:14.037 3.769 - 3.793: 98.5030% ( 3) 00:10:14.037 3.816 - 3.840: 98.5105% ( 1) 00:10:14.037 3.840 - 3.864: 98.5255% ( 2) 00:10:14.037 3.864 - 3.887: 98.5406% ( 2) 00:10:14.037 3.887 - 3.911: 98.5481% ( 1) 00:10:14.037 3.911 - 3.935: 98.5556% ( 1) 00:10:14.037 3.959 - 3.982: 98.5707% ( 2) 00:10:14.037 4.006 - 4.030: 98.5782% ( 1) 00:10:14.037 4.030 - 4.053: 98.5857% ( 1) 00:10:14.037 4.124 - 4.148: 98.6083% ( 3) 00:10:14.037 4.172 - 4.196: 98.6158% ( 1) 00:10:14.037 4.196 - 4.219: 98.6233% ( 1) 00:10:14.037 4.243 - 4.267: 98.6309% ( 1) 00:10:14.037 4.314 - 4.338: 98.6384% ( 1) 00:10:14.037 4.433 - 4.456: 98.6459% ( 1) 00:10:14.037 4.456 - 4.480: 98.6534% ( 1) 00:10:14.037 4.504 - 4.527: 98.6609% ( 1) 00:10:14.037 4.622 - 4.646: 98.6685% ( 1) 00:10:14.037 4.646 - 4.670: 98.6760% ( 1) 00:10:14.038 4.907 - 4.930: 98.6835% ( 1) 00:10:14.038 5.428 - 5.452: 98.6910% ( 1) 00:10:14.038 5.665 - 5.689: 98.6986% ( 1) 00:10:14.038 5.831 - 5.855: 98.7061% ( 1) 00:10:14.038 5.879 - 5.902: 98.7136% ( 1) 00:10:14.038 6.210 - 6.258: 98.7211% ( 1) 00:10:14.038 6.258 - 6.305: 98.7287% ( 1) 00:10:14.038 6.447 - 6.495: 98.7362% ( 1) 00:10:14.038 6.684 - 6.732: 98.7437% ( 1) 00:10:14.038 6.779 - 6.827: 98.7512% ( 1) 00:10:14.038 7.064 - 7.111: 98.7587% ( 1) 00:10:14.038 7.159 - 7.206: 98.7663% ( 1) 00:10:14.038 7.490 - 7.538: 98.7738% ( 1) 00:10:14.038 8.439 - 8.486: 98.7813% ( 1) 00:10:14.038 9.055 - 9.102: 98.7888% ( 1) 
00:10:14.038 9.624 - 9.671: 98.7964% ( 1) 00:10:14.038 10.809 - 10.856: 98.8039% ( 1) 00:10:14.038 12.421 - 12.516: 98.8114% ( 1) 00:10:14.038 15.550 - 15.644: 98.8265% ( 2) 00:10:14.038 15.644 - 15.739: 98.8490% ( 3) 00:10:14.038 15.739 - 15.834: 98.8565% ( 1) 00:10:14.038 15.834 - 15.929: 98.8716% ( 2) 00:10:14.038 15.929 - 16.024: 98.8866% ( 2) 00:10:14.038 16.024 - 16.119: 98.9393% ( 7) 00:10:14.038 16.119 - 16.213: 98.9694% ( 4) 00:10:14.038 16.213 - 16.308: 98.9920% ( 3) 00:10:14.038 16.308 - 16.403: 99.0371% ( 6) 00:10:14.038 16.403 - 16.498: 99.0672% ( 4) 00:10:14.038 16.498 - 16.593: 99.1499% ( 11) 00:10:14.038 16.593 - 16.687: 99.1725% ( 3) 00:10:14.038 16.687 - 16.782: 99.2101% ( 5) 00:10:14.038 16.782 - 16.877: 99.2252% ( 2) 00:10:14.038 16.877 - 16.972: 99.2402% ( 2) 00:10:14.038 16.972 - 17.067: 99.2628% ( 3) 00:10:14.038 17.161 - 17.256: 99.2853% ( 3) 00:10:14.038 17.256 - 17.351: 99.3004% ( 2) 00:10:14.038 17.351 - 17.446: 99.3079% ( 1) 00:10:14.038 17.541 - 17.636: 99.3154% ( 1) 00:10:14.038 17.636 - 17.730: 99.3305% ( 2) 00:10:14.038 17.730 - 17.825: 99.3380% ( 1) 00:10:14.038 18.204 - 18.299: 99.3455% ( 1) 00:10:14.038 18.489 - 18.584: 99.3530% ( 1) 00:10:14.038 21.144 - 21.239: 99.3606% ( 1) 00:10:14.038 25.410 - 25.600: 99.3681% ( 1) 00:10:14.038 30.341 - 30.530: 99.3756% ( 1) 00:10:14.038 39.253 - 39.443: 99.3831% ( 1) 00:10:14.038 39.822 - 40.012: 99.3907% ( 1) 00:10:14.038 1978.216 - 1990.353: 99.3982% ( 1) 00:10:14.038 2621.440 - 2633.576: 99.4057% ( 1) 00:10:14.038 3203.982 - 3228.255: 99.4132% ( 1) 00:10:14.038 3956.433 - 3980.705: 99.4207% ( 1) 00:10:14.038 3980.705 - 4004.978: 99.8270% ( 54) 00:10:14.038 4004.978 - 4029.250: 99.9699% ( 19) 00:10:14.038 4029.250 - 4053.523: 99.9850% ( 2) 00:10:14.038 5000.154 - 5024.427: 99.9925% ( 1) 00:10:14.038 5995.330 - 6019.603: 100.0000% ( 1) 00:10:14.038 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.297 [ 00:10:14.297 { 00:10:14.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.297 "subtype": "Discovery", 00:10:14.297 "listen_addresses": [], 00:10:14.297 "allow_any_host": true, 00:10:14.297 "hosts": [] 00:10:14.297 }, 00:10:14.297 { 00:10:14.297 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.297 "subtype": "NVMe", 00:10:14.297 "listen_addresses": [ 00:10:14.297 { 00:10:14.297 "trtype": "VFIOUSER", 00:10:14.297 "adrfam": "IPv4", 00:10:14.297 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.297 "trsvcid": "0" 00:10:14.297 } 00:10:14.297 ], 00:10:14.297 "allow_any_host": true, 00:10:14.297 "hosts": [], 00:10:14.297 "serial_number": "SPDK1", 00:10:14.297 "model_number": "SPDK bdev Controller", 00:10:14.297 "max_namespaces": 32, 00:10:14.297 "min_cntlid": 1, 00:10:14.297 "max_cntlid": 65519, 00:10:14.297 "namespaces": [ 00:10:14.297 { 00:10:14.297 "nsid": 1, 00:10:14.297 "bdev_name": "Malloc1", 00:10:14.297 "name": "Malloc1", 00:10:14.297 "nguid": 
"F33C38A63F3D4A4E94CEF8A12CE4027A", 00:10:14.297 "uuid": "f33c38a6-3f3d-4a4e-94ce-f8a12ce4027a" 00:10:14.297 }, 00:10:14.297 { 00:10:14.297 "nsid": 2, 00:10:14.297 "bdev_name": "Malloc3", 00:10:14.297 "name": "Malloc3", 00:10:14.297 "nguid": "93D1471B486B424B94129F74B0410FA5", 00:10:14.297 "uuid": "93d1471b-486b-424b-9412-9f74b0410fa5" 00:10:14.297 } 00:10:14.297 ] 00:10:14.297 }, 00:10:14.297 { 00:10:14.297 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.297 "subtype": "NVMe", 00:10:14.297 "listen_addresses": [ 00:10:14.297 { 00:10:14.297 "trtype": "VFIOUSER", 00:10:14.297 "adrfam": "IPv4", 00:10:14.297 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.297 "trsvcid": "0" 00:10:14.297 } 00:10:14.297 ], 00:10:14.297 "allow_any_host": true, 00:10:14.297 "hosts": [], 00:10:14.297 "serial_number": "SPDK2", 00:10:14.297 "model_number": "SPDK bdev Controller", 00:10:14.297 "max_namespaces": 32, 00:10:14.297 "min_cntlid": 1, 00:10:14.297 "max_cntlid": 65519, 00:10:14.297 "namespaces": [ 00:10:14.297 { 00:10:14.297 "nsid": 1, 00:10:14.297 "bdev_name": "Malloc2", 00:10:14.297 "name": "Malloc2", 00:10:14.297 "nguid": "F5126E9E58D84AA8A3045C8F0B38953E", 00:10:14.297 "uuid": "f5126e9e-58d8-4aa8-a304-5c8f0b38953e" 00:10:14.297 } 00:10:14.297 ] 00:10:14.297 } 00:10:14.297 ] 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2180959 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:14.297 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:14.557 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.557 [2024-07-15 17:33:09.583359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:14.816 Malloc4 00:10:14.816 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:14.816 [2024-07-15 17:33:09.944913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.076 17:33:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:15.076 Asynchronous Event Request test 00:10:15.076 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.076 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.076 Registering asynchronous event callbacks... 00:10:15.076 Starting namespace attribute notice tests for all controllers... 00:10:15.076 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:15.076 aer_cb - Changed Namespace 00:10:15.076 Cleaning up... 00:10:15.076 [ 00:10:15.076 { 00:10:15.076 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:15.076 "subtype": "Discovery", 00:10:15.076 "listen_addresses": [], 00:10:15.076 "allow_any_host": true, 00:10:15.076 "hosts": [] 00:10:15.076 }, 00:10:15.076 { 00:10:15.076 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:15.076 "subtype": "NVMe", 00:10:15.076 "listen_addresses": [ 00:10:15.076 { 00:10:15.076 "trtype": "VFIOUSER", 00:10:15.076 "adrfam": "IPv4", 00:10:15.076 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:15.076 "trsvcid": "0" 00:10:15.076 } 00:10:15.076 ], 00:10:15.076 "allow_any_host": true, 00:10:15.076 "hosts": [], 00:10:15.076 "serial_number": "SPDK1", 00:10:15.076 "model_number": "SPDK bdev Controller", 00:10:15.076 "max_namespaces": 32, 00:10:15.076 "min_cntlid": 1, 00:10:15.076 "max_cntlid": 65519, 00:10:15.076 "namespaces": [ 00:10:15.076 { 00:10:15.076 "nsid": 1, 00:10:15.076 "bdev_name": "Malloc1", 00:10:15.076 "name": "Malloc1", 00:10:15.076 "nguid": "F33C38A63F3D4A4E94CEF8A12CE4027A", 00:10:15.076 "uuid": "f33c38a6-3f3d-4a4e-94ce-f8a12ce4027a" 00:10:15.076 }, 00:10:15.076 { 00:10:15.076 "nsid": 2, 00:10:15.076 "bdev_name": "Malloc3", 00:10:15.076 "name": "Malloc3", 00:10:15.076 "nguid": "93D1471B486B424B94129F74B0410FA5", 00:10:15.076 "uuid": "93d1471b-486b-424b-9412-9f74b0410fa5" 00:10:15.076 } 00:10:15.076 ] 00:10:15.076 }, 00:10:15.076 { 00:10:15.076 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:15.076 "subtype": "NVMe", 00:10:15.076 "listen_addresses": [ 00:10:15.076 { 00:10:15.076 "trtype": "VFIOUSER", 00:10:15.076 "adrfam": "IPv4", 00:10:15.076 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:15.076 "trsvcid": "0" 00:10:15.076 } 00:10:15.076 ], 00:10:15.076 "allow_any_host": true, 00:10:15.076 "hosts": [], 00:10:15.077 "serial_number": "SPDK2", 00:10:15.077 "model_number": "SPDK bdev Controller", 00:10:15.077 
"max_namespaces": 32, 00:10:15.077 "min_cntlid": 1, 00:10:15.077 "max_cntlid": 65519, 00:10:15.077 "namespaces": [ 00:10:15.077 { 00:10:15.077 "nsid": 1, 00:10:15.077 "bdev_name": "Malloc2", 00:10:15.077 "name": "Malloc2", 00:10:15.077 "nguid": "F5126E9E58D84AA8A3045C8F0B38953E", 00:10:15.077 "uuid": "f5126e9e-58d8-4aa8-a304-5c8f0b38953e" 00:10:15.077 }, 00:10:15.077 { 00:10:15.077 "nsid": 2, 00:10:15.077 "bdev_name": "Malloc4", 00:10:15.077 "name": "Malloc4", 00:10:15.077 "nguid": "22A92DBEECF6457193F623DC4AD431A5", 00:10:15.077 "uuid": "22a92dbe-ecf6-4571-93f6-23dc4ad431a5" 00:10:15.077 } 00:10:15.077 ] 00:10:15.077 } 00:10:15.077 ] 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2180959 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2174729 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2174729 ']' 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2174729 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2174729 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2174729' 00:10:15.335 killing process with pid 2174729 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2174729 00:10:15.335 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2174729 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2181105 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2181105' 00:10:15.604 Process pid: 2181105 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2181105 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2181105 ']' 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.604 17:33:10 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.604 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:15.604 [2024-07-15 17:33:10.667014] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:15.604 [2024-07-15 17:33:10.668083] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:10:15.604 [2024-07-15 17:33:10.668141] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.604 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.604 [2024-07-15 17:33:10.725808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.869 [2024-07-15 17:33:10.836862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.869 [2024-07-15 17:33:10.836963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.869 [2024-07-15 17:33:10.836977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.869 [2024-07-15 17:33:10.836988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.869 [2024-07-15 17:33:10.836997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.869 [2024-07-15 17:33:10.837051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.869 [2024-07-15 17:33:10.837111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.869 [2024-07-15 17:33:10.837175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.869 [2024-07-15 17:33:10.837178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.869 [2024-07-15 17:33:10.949267] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:15.869 [2024-07-15 17:33:10.949507] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:15.869 [2024-07-15 17:33:10.949780] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:15.869 [2024-07-15 17:33:10.950464] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:15.869 [2024-07-15 17:33:10.950687] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
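For orientation, the xtrace that follows drives the interrupt-mode vfio-user target bring-up through rpc.py. The sketch below condenses that sequence into a standalone script; the rpc.py path and the loop form are assumptions for illustration, while the subcommands and their arguments mirror the trace itself, so treat it as a reading aid rather than the harness's actual code.

#!/usr/bin/env bash
# Condensed sketch of the vfio-user target setup exercised by the trace below.
# NOTE: rpc.py location and loop structure are assumed; subcommands and flags mirror the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# The interrupt-mode run passes -M -I straight through to the VFIOUSER transport.
$rpc nvmf_create_transport -t VFIOUSER -M -I

for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    # 64 MB malloc bdev with 512-byte blocks backs namespace 1 of each subsystem.
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done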
00:10:15.869 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.869 17:33:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:15.869 17:33:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:17.248 17:33:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:17.248 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:17.248 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:17.248 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:17.248 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:17.248 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:17.505 Malloc1 00:10:17.505 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:17.763 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:18.022 17:33:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:18.283 17:33:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:18.283 17:33:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:18.283 17:33:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:18.542 Malloc2 00:10:18.543 17:33:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:18.800 17:33:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:19.058 17:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2181105 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2181105 ']' 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2181105 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.315 17:33:14 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2181105 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:19.315 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2181105' 00:10:19.315 killing process with pid 2181105 00:10:19.316 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2181105 00:10:19.316 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2181105 00:10:19.574 17:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:19.574 17:33:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:19.574 00:10:19.574 real 0m53.216s 00:10:19.574 user 3m30.081s 00:10:19.574 sys 0m4.245s 00:10:19.574 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.574 17:33:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:19.574 ************************************ 00:10:19.574 END TEST nvmf_vfio_user 00:10:19.574 ************************************ 00:10:19.574 17:33:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:19.574 17:33:14 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:19.574 17:33:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:19.574 17:33:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.574 17:33:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.574 ************************************ 00:10:19.574 START TEST nvmf_vfio_user_nvme_compliance 00:10:19.574 ************************************ 00:10:19.574 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:19.834 * Looking for test storage... 
00:10:19.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2181704 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2181704' 00:10:19.834 Process pid: 2181704 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2181704 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2181704 ']' 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.834 17:33:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:19.834 [2024-07-15 17:33:14.797799] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:10:19.834 [2024-07-15 17:33:14.797906] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.834 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.834 [2024-07-15 17:33:14.855035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.834 [2024-07-15 17:33:14.960869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.834 [2024-07-15 17:33:14.960960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.834 [2024-07-15 17:33:14.960973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.834 [2024-07-15 17:33:14.960984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.834 [2024-07-15 17:33:14.960993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:19.834 [2024-07-15 17:33:14.961058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.834 [2024-07-15 17:33:14.961116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.834 [2024-07-15 17:33:14.961118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.095 17:33:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.095 17:33:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:20.095 17:33:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 malloc0 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.031 
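Between the target setup just traced and the compliance run that follows, the same flow can be summarized as the sketch below. It assumes the rpc.py and nvme_compliance paths shown in the trace; the subcommands, NQN, and -r connection string are copied from the surrounding output, so this is a condensed illustration, not the test's source.

#!/usr/bin/env bash
# Condensed sketch: stand up one vfio-user subsystem, then point the compliance tool at it.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$rpc bdev_malloc_create 64 512 -b malloc0
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -m 32: at most 32 namespaces
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# Run the compliance suite against the vfio-user endpoint, as in the trace below.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance \
    -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'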
17:33:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:21.290 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.290 00:10:21.290 00:10:21.290 CUnit - A unit testing framework for C - Version 2.1-3 00:10:21.290 http://cunit.sourceforge.net/ 00:10:21.290 00:10:21.290 00:10:21.290 Suite: nvme_compliance 00:10:21.290 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 17:33:16.312400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.290 [2024-07-15 17:33:16.313850] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:21.290 [2024-07-15 17:33:16.313896] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:21.290 [2024-07-15 17:33:16.313909] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:21.290 [2024-07-15 17:33:16.315419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.290 passed 00:10:21.290 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 17:33:16.400989] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.290 [2024-07-15 17:33:16.404012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.549 passed 00:10:21.549 Test: admin_identify_ns ...[2024-07-15 17:33:16.492523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.549 [2024-07-15 17:33:16.552908] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:21.549 [2024-07-15 17:33:16.560891] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:21.549 [2024-07-15 17:33:16.582018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.549 passed 00:10:21.549 Test: admin_get_features_mandatory_features ...[2024-07-15 17:33:16.662635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.549 [2024-07-15 17:33:16.667671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.809 passed 00:10:21.809 Test: admin_get_features_optional_features ...[2024-07-15 17:33:16.753232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.809 [2024-07-15 17:33:16.756267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.809 passed 00:10:21.809 Test: admin_set_features_number_of_queues ...[2024-07-15 17:33:16.838598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.809 [2024-07-15 17:33:16.943980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.067 passed 00:10:22.067 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 17:33:17.028592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.067 [2024-07-15 17:33:17.031614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.067 passed 00:10:22.067 Test: admin_get_log_page_with_lpo ...[2024-07-15 17:33:17.109398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.067 [2024-07-15 17:33:17.177925] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:22.067 [2024-07-15 17:33:17.190965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.325 passed 00:10:22.325 Test: fabric_property_get ...[2024-07-15 17:33:17.274537] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.325 [2024-07-15 17:33:17.275811] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:22.325 [2024-07-15 17:33:17.277561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.325 passed 00:10:22.325 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 17:33:17.363130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.325 [2024-07-15 17:33:17.364422] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:22.325 [2024-07-15 17:33:17.366153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.325 passed 00:10:22.325 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 17:33:17.448354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.585 [2024-07-15 17:33:17.531901] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:22.585 [2024-07-15 17:33:17.547926] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:22.585 [2024-07-15 17:33:17.553003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.585 passed 00:10:22.585 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 17:33:17.636577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.585 [2024-07-15 17:33:17.637900] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:22.585 [2024-07-15 17:33:17.639598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.585 passed 00:10:22.585 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 17:33:17.720769] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.845 [2024-07-15 17:33:17.795888] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:22.845 [2024-07-15 17:33:17.819889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:22.845 [2024-07-15 17:33:17.825018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.845 passed 00:10:22.845 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 17:33:17.909907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:22.845 [2024-07-15 17:33:17.911239] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:22.845 [2024-07-15 17:33:17.911294] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:22.845 [2024-07-15 17:33:17.912939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:22.845 passed 00:10:23.124 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 17:33:17.995642] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:23.125 [2024-07-15 17:33:18.089909] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:23.125 [2024-07-15 17:33:18.097106] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:23.125 [2024-07-15 17:33:18.105888] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:23.125 [2024-07-15 17:33:18.113905] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:23.125 [2024-07-15 17:33:18.142987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:23.125 passed 00:10:23.125 Test: admin_create_io_sq_verify_pc ...[2024-07-15 17:33:18.225275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:23.125 [2024-07-15 17:33:18.240902] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:23.125 [2024-07-15 17:33:18.258626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:23.384 passed 00:10:23.384 Test: admin_create_io_qp_max_qps ...[2024-07-15 17:33:18.344226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.320 [2024-07-15 17:33:19.436893] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:24.890 [2024-07-15 17:33:19.816736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.890 passed 00:10:24.890 Test: admin_create_io_sq_shared_cq ...[2024-07-15 17:33:19.898046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.148 [2024-07-15 17:33:20.030902] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:25.148 [2024-07-15 17:33:20.068019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.148 passed 00:10:25.148 00:10:25.148 Run Summary: Type Total Ran Passed Failed Inactive 00:10:25.148 suites 1 1 n/a 0 0 00:10:25.148 tests 18 18 18 0 0 00:10:25.148 asserts 360 360 360 0 n/a 00:10:25.148 00:10:25.148 Elapsed time = 1.557 seconds 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2181704 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2181704 ']' 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2181704 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2181704 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2181704' 00:10:25.148 killing process with pid 2181704 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2181704 00:10:25.148 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2181704 00:10:25.407 17:33:20 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:25.407 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:25.407 00:10:25.407 real 0m5.758s 00:10:25.407 user 0m16.150s 00:10:25.407 sys 0m0.515s 00:10:25.407 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.407 17:33:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:25.407 ************************************ 00:10:25.407 END TEST nvmf_vfio_user_nvme_compliance 00:10:25.407 ************************************ 00:10:25.407 17:33:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:25.407 17:33:20 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:25.407 17:33:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.407 17:33:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.407 17:33:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.407 ************************************ 00:10:25.407 START TEST nvmf_vfio_user_fuzz 00:10:25.407 ************************************ 00:10:25.407 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:25.665 * Looking for test storage... 00:10:25.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:25.665 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.666 17:33:20 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2182428 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2182428' 00:10:25.666 Process pid: 2182428 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2182428 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2182428 ']' 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.666 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:25.925 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.925 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:25.925 17:33:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:26.864 malloc0 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.864 17:33:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 17:33:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.123 17:33:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:27.123 17:33:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:10:59.260 Fuzzing completed. 
Shutting down the fuzz application 00:10:59.260 00:10:59.260 Dumping successful admin opcodes: 00:10:59.260 8, 9, 10, 24, 00:10:59.260 Dumping successful io opcodes: 00:10:59.260 0, 00:10:59.260 NS: 0x200003a1ef00 I/O qp, Total commands completed: 562872, total successful commands: 2162, random_seed: 2253578304 00:10:59.260 NS: 0x200003a1ef00 admin qp, Total commands completed: 134322, total successful commands: 1085, random_seed: 1801143360 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2182428 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2182428 ']' 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2182428 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2182428 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2182428' 00:10:59.261 killing process with pid 2182428 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2182428 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2182428 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:59.261 00:10:59.261 real 0m32.409s 00:10:59.261 user 0m31.767s 00:10:59.261 sys 0m28.867s 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.261 17:33:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:59.261 ************************************ 00:10:59.261 END TEST nvmf_vfio_user_fuzz 00:10:59.261 ************************************ 00:10:59.261 17:33:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:59.261 17:33:52 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:59.261 17:33:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:59.261 17:33:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.261 17:33:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.261 ************************************ 
00:10:59.261 START TEST nvmf_host_management 00:10:59.261 ************************************ 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:59.261 * Looking for test storage... 00:10:59.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.261 17:33:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.261 
17:33:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.261 17:33:53 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.261 17:33:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.201 17:33:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.201 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.201 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:11:00.202 00:11:00.202 --- 10.0.0.2 ping statistics --- 00:11:00.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.202 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:11:00.202 00:11:00.202 --- 10.0.0.1 ping statistics --- 00:11:00.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.202 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2187877 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2187877 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2187877 ']' 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:00.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.202 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.202 [2024-07-15 17:33:55.211766] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:00.202 [2024-07-15 17:33:55.211850] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.202 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.202 [2024-07-15 17:33:55.279415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.461 [2024-07-15 17:33:55.391997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.461 [2024-07-15 17:33:55.392051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.461 [2024-07-15 17:33:55.392065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.461 [2024-07-15 17:33:55.392077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.461 [2024-07-15 17:33:55.392086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.461 [2024-07-15 17:33:55.392169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.461 [2024-07-15 17:33:55.392212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.461 [2024-07-15 17:33:55.392234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.461 [2024-07-15 17:33:55.392238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.461 [2024-07-15 17:33:55.553767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.461 17:33:55 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.461 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.461 Malloc0 00:11:00.720 [2024-07-15 17:33:55.613187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2187933 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2187933 /var/tmp/bdevperf.sock 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2187933 ']' 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:00.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
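The RPC batch written to rpcs.txt is not echoed in the log, but the Malloc0 bdev, the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above, and the cnode0/host0 NQNs used by bdevperf below suggest a target setup along these lines. This is an inference for illustration, not a verbatim copy of host_management.sh:

    # plausible equivalent of the rpcs.txt batch, inferred from the log
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The add_host step is suggested by the remove_host/add_host toggle the test performs later in this run.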
00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:00.720 { 00:11:00.720 "params": { 00:11:00.720 "name": "Nvme$subsystem", 00:11:00.720 "trtype": "$TEST_TRANSPORT", 00:11:00.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.720 "adrfam": "ipv4", 00:11:00.720 "trsvcid": "$NVMF_PORT", 00:11:00.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.720 "hdgst": ${hdgst:-false}, 00:11:00.720 "ddgst": ${ddgst:-false} 00:11:00.720 }, 00:11:00.720 "method": "bdev_nvme_attach_controller" 00:11:00.720 } 00:11:00.720 EOF 00:11:00.720 )") 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:00.720 17:33:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:00.720 "params": { 00:11:00.720 "name": "Nvme0", 00:11:00.720 "trtype": "tcp", 00:11:00.720 "traddr": "10.0.0.2", 00:11:00.720 "adrfam": "ipv4", 00:11:00.720 "trsvcid": "4420", 00:11:00.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:00.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:00.720 "hdgst": false, 00:11:00.720 "ddgst": false 00:11:00.720 }, 00:11:00.720 "method": "bdev_nvme_attach_controller" 00:11:00.720 }' 00:11:00.720 [2024-07-15 17:33:55.686404] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:00.720 [2024-07-15 17:33:55.686501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187933 ] 00:11:00.720 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.720 [2024-07-15 17:33:55.749026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.979 [2024-07-15 17:33:55.860670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.979 Running I/O for 10 seconds... 
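The "Running I/O for 10 seconds..." banner marks bdevperf starting its verify workload; the waitforio step that follows polls the bdev's read counter over the bdevperf RPC socket until at least 100 reads have completed (67 on the first poll below, 515 on the second). A minimal sketch of that polling loop, assuming scripts/rpc.py and jq are on PATH and bdevperf is listening on /var/tmp/bdevperf.sock as above:

    # wait until bdevperf has completed >= 100 reads on Nvme0n1 (sketch)
    for i in $(seq 1 10); do
        reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done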
00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:01.237 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.497 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.497 [2024-07-15 17:33:56.492087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.497 [2024-07-15 17:33:56.492353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 
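The burst of "recv state of tqpair=0x980380" messages above, and the stream of ABORTED - SQ DELETION completions that follows, line up with the test revoking host access while bdevperf still has I/O in flight: host_management.sh@84 calls nvmf_subsystem_remove_host, and @85 (visible below) re-adds the host. A sketch of that toggle, using the NQNs from the log and assuming the default target RPC socket:

    # revoke and restore host access while I/O is running (sketch)
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0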
00:11:01.498 [2024-07-15 17:33:56.492389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is 
same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.492706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980380 is same with the state(5) to be set 00:11:01.498 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.498 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:01.498 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.498 [2024-07-15 17:33:56.496882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.498 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:01.498 [2024-07-15 17:33:56.496939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.496957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.498 [2024-07-15 17:33:56.496972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.496986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.498 [2024-07-15 17:33:56.497001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.498 [2024-07-15 17:33:56.497030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9b790 is same with the state(5) to be set 00:11:01.498 [2024-07-15 17:33:56.497112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.498 [2024-07-15 17:33:56.497537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.498 [2024-07-15 17:33:56.497557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.497966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.497987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.498021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.498054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.498086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.498119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.498152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.498192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.499 [2024-07-15 17:33:56.498224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:01.499 [2024-07-15 17:33:56.498240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:01.499 [2024-07-15 17:33:56.498256 - 17:33:56.499314] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated command/completion pairs condensed] READ sqid:1 cid:53-63 nsid:1 lba:72320-73600 len:128 and WRITE sqid:1 cid:0-19 nsid:1 lba:73728-76160 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:01.500 [2024-07-15 17:33:56.499414] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ac900 was disconnected and freed. reset controller.
00:11:01.500 [2024-07-15 17:33:56.500674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:01.500 task offset: 68096 on job bdev=Nvme0n1 fails
00:11:01.500 
00:11:01.500 Latency(us)
00:11:01.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:01.500 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:01.500 Job: Nvme0n1 ended in about 0.41 seconds with error
00:11:01.500 Verification LBA range: start 0x0 length 0x400
00:11:01.500 Nvme0n1 : 0.41 1293.78 80.86 155.64 0.00 42956.68 2985.53 39418.69
00:11:01.500 ===================================================================================================================
00:11:01.500 Total : 1293.78 80.86 155.64 0.00 42956.68 2985.53 39418.69
00:11:01.500 [2024-07-15 17:33:56.502793] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:01.500 [2024-07-15 17:33:56.502821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9b790 (9): Bad file descriptor
00:11:01.500 17:33:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:01.500 17:33:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 [2024-07-15 17:33:56.509229] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
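The burst of NOTICE pairs above is the expected signature of the failover step exercised here: when the I/O qpair is torn down, every command still queued on it completes with the generic status ABORTED - SQ DELETION (00/08), bdev_nvme frees the disconnected qpair and resets the controller, and bdevperf reports the interrupted job's partial result (the 80.86 MiB/s is simply 1293.78 IOPS at 64 KiB per I/O: 1293.78 * 65536 / 1048576 ~= 80.86). A quick sanity check of a run like this against the captured console output (the file name is only an example):

  # count the queued I/Os that were failed back during the reset window
  grep -c 'ABORTED - SQ DELETION' console.log
  # confirm the reset completed afterwards
  grep 'Resetting controller successful' console.log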
00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2187933 00:11:02.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2187933) - No such process 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:02.433 { 00:11:02.433 "params": { 00:11:02.433 "name": "Nvme$subsystem", 00:11:02.433 "trtype": "$TEST_TRANSPORT", 00:11:02.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:02.433 "adrfam": "ipv4", 00:11:02.433 "trsvcid": "$NVMF_PORT", 00:11:02.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:02.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:02.433 "hdgst": ${hdgst:-false}, 00:11:02.433 "ddgst": ${ddgst:-false} 00:11:02.433 }, 00:11:02.433 "method": "bdev_nvme_attach_controller" 00:11:02.433 } 00:11:02.433 EOF 00:11:02.433 )") 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:02.433 17:33:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:02.433 "params": { 00:11:02.433 "name": "Nvme0", 00:11:02.433 "trtype": "tcp", 00:11:02.433 "traddr": "10.0.0.2", 00:11:02.433 "adrfam": "ipv4", 00:11:02.433 "trsvcid": "4420", 00:11:02.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:02.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:02.433 "hdgst": false, 00:11:02.433 "ddgst": false 00:11:02.433 }, 00:11:02.433 "method": "bdev_nvme_attach_controller" 00:11:02.433 }' 00:11:02.433 [2024-07-15 17:33:57.554797] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:02.433 [2024-07-15 17:33:57.554896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188201 ] 00:11:02.691 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.691 [2024-07-15 17:33:57.615054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.691 [2024-07-15 17:33:57.726954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.949 Running I/O for 1 seconds... 
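The relaunched bdevperf takes its whole configuration from the JSON fragment printed above, passed in over /dev/fd/62; the fragment shown carries a single bdev_nvme_attach_controller call pointing at the target created earlier. Against an already running SPDK application the same attach can be issued as a one-off RPC, roughly as below (the RPC socket path is an assumption, this run never prints bdevperf's socket):

  # sketch: the JSON "method" above expressed as an rpc.py invocation
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0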
00:11:03.884 00:11:03.884 Latency(us) 00:11:03.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.884 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:03.884 Verification LBA range: start 0x0 length 0x400 00:11:03.884 Nvme0n1 : 1.05 1598.77 99.92 0.00 0.00 37807.55 2876.30 43496.49 00:11:03.884 =================================================================================================================== 00:11:03.884 Total : 1598.77 99.92 0.00 0.00 37807.55 2876.30 43496.49 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.143 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.143 rmmod nvme_tcp 00:11:04.143 rmmod nvme_fabrics 00:11:04.143 rmmod nvme_keyring 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2187877 ']' 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2187877 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2187877 ']' 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2187877 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2187877 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2187877' 00:11:04.400 killing process with pid 2187877 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2187877 00:11:04.400 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2187877 00:11:04.658 [2024-07-15 17:33:59.598153] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.658 17:33:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.560 17:34:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:06.560 17:34:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:06.560 00:11:06.560 real 0m8.725s 00:11:06.560 user 0m19.645s 00:11:06.560 sys 0m2.700s 00:11:06.560 17:34:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.560 17:34:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:06.560 ************************************ 00:11:06.560 END TEST nvmf_host_management 00:11:06.560 ************************************ 00:11:06.560 17:34:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:06.560 17:34:01 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:06.560 17:34:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:06.560 17:34:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.818 17:34:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.818 ************************************ 00:11:06.818 START TEST nvmf_lvol 00:11:06.818 ************************************ 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:06.818 * Looking for test storage... 
00:11:06.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.818 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.819 17:34:01 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:06.819 17:34:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:08.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.721 
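The NIC discovery above is plain sysfs: for each supported PCI function the harness globs /sys/bus/pci/devices/$pci/net/ to find the bound netdev, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1 before is_hw is set to yes. The same lookup done by hand on this box:

  ls /sys/bus/pci/devices/0000:0a:00.0/net    # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:0a:00.1/net    # -> cvl_0_1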
17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:11:08.721 00:11:08.721 --- 10.0.0.2 ping statistics --- 00:11:08.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.721 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:11:08.721 00:11:08.721 --- 10.0.0.1 ping statistics --- 00:11:08.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.721 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.721 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2190405 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2190405 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2190405 ']' 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.722 17:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:08.980 [2024-07-15 17:34:03.880453] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:08.980 [2024-07-15 17:34:03.880521] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.980 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.980 [2024-07-15 17:34:03.942201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.980 [2024-07-15 17:34:04.047289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.980 [2024-07-15 17:34:04.047340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
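nvmf_tcp_init above turns the two E810 ports into a self-contained initiator/target pair: the target port cvl_0_0 is moved into its own network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in the firewall, and connectivity is ping-checked in both directions before the target application is started inside the namespace. Stripped of the harness wrappers, the same plumbing is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator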
00:11:08.980 [2024-07-15 17:34:04.047365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.980 [2024-07-15 17:34:04.047376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.980 [2024-07-15 17:34:04.047400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.980 [2024-07-15 17:34:04.047501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.980 [2024-07-15 17:34:04.047566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.980 [2024-07-15 17:34:04.047569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.238 17:34:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.238 17:34:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:09.238 17:34:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.238 17:34:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.238 17:34:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:09.238 17:34:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.238 17:34:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:09.494 [2024-07-15 17:34:04.468964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.494 17:34:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:09.752 17:34:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:09.752 17:34:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.009 17:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:10.009 17:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:10.266 17:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:10.524 17:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a09f3498-4bc7-49c5-9613-e42bd1f85ebf 00:11:10.524 17:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a09f3498-4bc7-49c5-9613-e42bd1f85ebf lvol 20 00:11:10.782 17:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a 00:11:10.782 17:34:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:11.040 17:34:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a 00:11:11.297 17:34:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:11.555 [2024-07-15 17:34:06.593981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.555 17:34:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.812 17:34:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2190710 00:11:11.812 17:34:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:11.812 17:34:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:11.812 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.744 17:34:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a MY_SNAPSHOT 00:11:13.341 17:34:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=62bed62c-afb6-4719-bddd-64dd12e9fb87 00:11:13.341 17:34:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a 30 00:11:13.605 17:34:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 62bed62c-afb6-4719-bddd-64dd12e9fb87 MY_CLONE 00:11:13.605 17:34:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=282f6115-255f-4609-9b64-056a0cc11b0a 00:11:13.605 17:34:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 282f6115-255f-4609-9b64-056a0cc11b0a 00:11:14.172 17:34:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2190710 00:11:22.281 Initializing NVMe Controllers 00:11:22.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:22.281 Controller IO queue size 128, less than required. 00:11:22.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:22.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:22.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:22.281 Initialization complete. Launching workers. 
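Everything the lvol test has done to the target so far is ordinary rpc.py driving: two 64 MiB malloc bdevs are striped into a raid0, a logical volume store and a 20 MiB volume are created on top, the volume is exported over NVMe/TCP, and while spdk_nvme_perf writes to it the volume is snapshotted, resized to 30 MiB, cloned, and the clone inflated. Collapsed into the bare RPC sequence (UUIDs are the ones reported above; transport creation and the discovery listener are omitted):

  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                          # Malloc0
  $rpc bdev_malloc_create 64 512                                          # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs                                 # -> a09f3498-...
  $rpc bdev_lvol_create -u a09f3498-4bc7-49c5-9613-e42bd1f85ebf lvol 20   # -> 16dae1a3-...
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while perf is writing to the namespace:
  $rpc bdev_lvol_snapshot 16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a MY_SNAPSHOT   # -> 62bed62c-...
  $rpc bdev_lvol_resize 16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a 30
  $rpc bdev_lvol_clone 62bed62c-afb6-4719-bddd-64dd12e9fb87 MY_CLONE         # -> 282f6115-...
  $rpc bdev_lvol_inflate 282f6115-255f-4609-9b64-056a0cc11b0a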
00:11:22.281 ======================================================== 00:11:22.281 Latency(us) 00:11:22.281 Device Information : IOPS MiB/s Average min max 00:11:22.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10039.90 39.22 12752.74 2256.99 90881.25 00:11:22.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10548.80 41.21 12140.37 2049.73 60488.04 00:11:22.281 ======================================================== 00:11:22.281 Total : 20588.70 80.42 12438.99 2049.73 90881.25 00:11:22.281 00:11:22.281 17:34:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:22.553 17:34:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16dae1a3-bd8c-4ce2-bb85-ad7d3f8c468a 00:11:22.811 17:34:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a09f3498-4bc7-49c5-9613-e42bd1f85ebf 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.069 rmmod nvme_tcp 00:11:23.069 rmmod nvme_fabrics 00:11:23.069 rmmod nvme_keyring 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2190405 ']' 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2190405 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2190405 ']' 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2190405 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2190405 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2190405' 00:11:23.069 killing process with pid 2190405 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2190405 00:11:23.069 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2190405 00:11:23.636 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.636 
17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:23.636 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:23.636 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.636 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.636 17:34:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.636 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.636 17:34:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:25.540 00:11:25.540 real 0m18.813s 00:11:25.540 user 1m4.171s 00:11:25.540 sys 0m5.703s 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:25.540 ************************************ 00:11:25.540 END TEST nvmf_lvol 00:11:25.540 ************************************ 00:11:25.540 17:34:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:25.540 17:34:20 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:25.540 17:34:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:25.540 17:34:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.540 17:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:25.540 ************************************ 00:11:25.540 START TEST nvmf_lvs_grow 00:11:25.540 ************************************ 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:25.540 * Looking for test storage... 
00:11:25.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.540 17:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:27.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:27.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:27.441 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.441 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:27.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.442 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:11:27.700 00:11:27.700 --- 10.0.0.2 ping statistics --- 00:11:27.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.700 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:11:27.700 00:11:27.700 --- 10.0.0.1 ping statistics --- 00:11:27.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.700 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2193971 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2193971 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2193971 ']' 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.700 17:34:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:27.700 [2024-07-15 17:34:22.700330] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:27.700 [2024-07-15 17:34:22.700429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.700 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.700 [2024-07-15 17:34:22.774373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.958 [2024-07-15 17:34:22.895577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.958 [2024-07-15 17:34:22.895635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
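The nvmf_tcp_init sequence traced above builds the two-ended TCP test bed on one host: one e810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a single iptables rule opens the NVMe/TCP port before both directions are ping-checked. A minimal sketch of that setup, assuming the interface and namespace names from this log (adjust to the local NIC naming):

# target side lives in its own namespace so both ends can share one machine
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check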
00:11:27.958 [2024-07-15 17:34:22.895653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.958 [2024-07-15 17:34:22.895668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.958 [2024-07-15 17:34:22.895680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.958 [2024-07-15 17:34:22.895720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.958 17:34:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.958 17:34:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:27.958 17:34:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.958 17:34:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:27.958 17:34:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:27.958 17:34:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.958 17:34:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:28.216 [2024-07-15 17:34:23.305514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.216 17:34:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:28.216 17:34:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:28.216 17:34:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.216 17:34:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:28.474 ************************************ 00:11:28.474 START TEST lvs_grow_clean 00:11:28.474 ************************************ 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:28.474 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:28.732 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:28.732 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:28.990 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:28.990 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:28.990 17:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:29.248 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:29.248 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:29.248 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ead3c50b-b119-4d3a-a2d4-74e60363d415 lvol 150 00:11:29.506 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f1478c1-b41b-4759-95bd-8ecbbd8bb03d 00:11:29.506 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:29.506 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:29.506 [2024-07-15 17:34:24.630145] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:29.506 [2024-07-15 17:34:24.630243] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:29.506 true 00:11:29.764 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:29.764 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:30.022 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:30.022 17:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:30.280 17:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f1478c1-b41b-4759-95bd-8ecbbd8bb03d 00:11:30.538 17:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:30.796 [2024-07-15 17:34:25.713412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.796 17:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2194411 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2194411 /var/tmp/bdevperf.sock 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2194411 ']' 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.054 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:31.054 [2024-07-15 17:34:26.061272] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
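Before bdevperf attaches, the lvs_grow_clean setup traced above exports a 150 MiB logical volume, carved from an lvstore on a file-backed AIO bdev, over NVMe/TCP, and enlarges the backing file up front so the later grow call has room. A condensed sketch of that RPC sequence, with the long Jenkins workspace paths abbreviated to rpc.py and aio_file (the UUIDs are simply whatever the create calls return):

# build the logical volume stack on a sparse file
truncate -s 200M aio_file
rpc.py bdev_aio_create aio_file aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB lvol bdev
truncate -s 400M aio_file                                  # enlarge the backing file
rpc.py bdev_aio_rescan aio_bdev                            # bdev sees the new size; lvstore still reports 49 clusters
# export the lvol over NVMe/TCP
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420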
00:11:31.054 [2024-07-15 17:34:26.061350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194411 ] 00:11:31.054 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.054 [2024-07-15 17:34:26.122856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.313 [2024-07-15 17:34:26.239458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.313 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.313 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:31.313 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:31.570 Nvme0n1 00:11:31.570 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:31.828 [ 00:11:31.828 { 00:11:31.828 "name": "Nvme0n1", 00:11:31.828 "aliases": [ 00:11:31.828 "5f1478c1-b41b-4759-95bd-8ecbbd8bb03d" 00:11:31.828 ], 00:11:31.828 "product_name": "NVMe disk", 00:11:31.828 "block_size": 4096, 00:11:31.828 "num_blocks": 38912, 00:11:31.828 "uuid": "5f1478c1-b41b-4759-95bd-8ecbbd8bb03d", 00:11:31.828 "assigned_rate_limits": { 00:11:31.828 "rw_ios_per_sec": 0, 00:11:31.828 "rw_mbytes_per_sec": 0, 00:11:31.828 "r_mbytes_per_sec": 0, 00:11:31.828 "w_mbytes_per_sec": 0 00:11:31.828 }, 00:11:31.828 "claimed": false, 00:11:31.828 "zoned": false, 00:11:31.828 "supported_io_types": { 00:11:31.828 "read": true, 00:11:31.828 "write": true, 00:11:31.828 "unmap": true, 00:11:31.828 "flush": true, 00:11:31.828 "reset": true, 00:11:31.828 "nvme_admin": true, 00:11:31.828 "nvme_io": true, 00:11:31.828 "nvme_io_md": false, 00:11:31.828 "write_zeroes": true, 00:11:31.828 "zcopy": false, 00:11:31.828 "get_zone_info": false, 00:11:31.828 "zone_management": false, 00:11:31.828 "zone_append": false, 00:11:31.828 "compare": true, 00:11:31.828 "compare_and_write": true, 00:11:31.828 "abort": true, 00:11:31.828 "seek_hole": false, 00:11:31.828 "seek_data": false, 00:11:31.828 "copy": true, 00:11:31.828 "nvme_iov_md": false 00:11:31.828 }, 00:11:31.828 "memory_domains": [ 00:11:31.828 { 00:11:31.828 "dma_device_id": "system", 00:11:31.828 "dma_device_type": 1 00:11:31.828 } 00:11:31.828 ], 00:11:31.828 "driver_specific": { 00:11:31.828 "nvme": [ 00:11:31.828 { 00:11:31.828 "trid": { 00:11:31.828 "trtype": "TCP", 00:11:31.828 "adrfam": "IPv4", 00:11:31.828 "traddr": "10.0.0.2", 00:11:31.828 "trsvcid": "4420", 00:11:31.828 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:31.828 }, 00:11:31.828 "ctrlr_data": { 00:11:31.828 "cntlid": 1, 00:11:31.828 "vendor_id": "0x8086", 00:11:31.828 "model_number": "SPDK bdev Controller", 00:11:31.828 "serial_number": "SPDK0", 00:11:31.828 "firmware_revision": "24.09", 00:11:31.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:31.828 "oacs": { 00:11:31.828 "security": 0, 00:11:31.828 "format": 0, 00:11:31.828 "firmware": 0, 00:11:31.828 "ns_manage": 0 00:11:31.828 }, 00:11:31.828 "multi_ctrlr": true, 00:11:31.828 "ana_reporting": false 00:11:31.828 }, 
00:11:31.828 "vs": { 00:11:31.828 "nvme_version": "1.3" 00:11:31.828 }, 00:11:31.828 "ns_data": { 00:11:31.828 "id": 1, 00:11:31.828 "can_share": true 00:11:31.828 } 00:11:31.828 } 00:11:31.828 ], 00:11:31.828 "mp_policy": "active_passive" 00:11:31.828 } 00:11:31.828 } 00:11:31.828 ] 00:11:31.828 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2194547 00:11:31.828 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:31.828 17:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:32.086 Running I/O for 10 seconds... 00:11:33.021 Latency(us) 00:11:33.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.021 Nvme0n1 : 1.00 14403.00 56.26 0.00 0.00 0.00 0.00 0.00 00:11:33.021 =================================================================================================================== 00:11:33.021 Total : 14403.00 56.26 0.00 0.00 0.00 0.00 0.00 00:11:33.021 00:11:33.962 17:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:33.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.962 Nvme0n1 : 2.00 14500.50 56.64 0.00 0.00 0.00 0.00 0.00 00:11:33.962 =================================================================================================================== 00:11:33.962 Total : 14500.50 56.64 0.00 0.00 0.00 0.00 0.00 00:11:33.962 00:11:34.252 true 00:11:34.252 17:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:34.252 17:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:34.511 17:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:34.511 17:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:34.511 17:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2194547 00:11:35.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.076 Nvme0n1 : 3.00 14577.00 56.94 0.00 0.00 0.00 0.00 0.00 00:11:35.076 =================================================================================================================== 00:11:35.076 Total : 14577.00 56.94 0.00 0.00 0.00 0.00 0.00 00:11:35.076 00:11:36.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.009 Nvme0n1 : 4.00 14612.75 57.08 0.00 0.00 0.00 0.00 0.00 00:11:36.009 =================================================================================================================== 00:11:36.009 Total : 14612.75 57.08 0.00 0.00 0.00 0.00 0.00 00:11:36.009 00:11:36.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.941 Nvme0n1 : 5.00 14659.00 57.26 0.00 0.00 0.00 0.00 0.00 00:11:36.941 =================================================================================================================== 00:11:36.941 
Total : 14659.00 57.26 0.00 0.00 0.00 0.00 0.00 00:11:36.941 00:11:38.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.315 Nvme0n1 : 6.00 14742.50 57.59 0.00 0.00 0.00 0.00 0.00 00:11:38.315 =================================================================================================================== 00:11:38.315 Total : 14742.50 57.59 0.00 0.00 0.00 0.00 0.00 00:11:38.315 00:11:39.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.250 Nvme0n1 : 7.00 14818.57 57.89 0.00 0.00 0.00 0.00 0.00 00:11:39.250 =================================================================================================================== 00:11:39.250 Total : 14818.57 57.89 0.00 0.00 0.00 0.00 0.00 00:11:39.250 00:11:40.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.182 Nvme0n1 : 8.00 14854.12 58.02 0.00 0.00 0.00 0.00 0.00 00:11:40.182 =================================================================================================================== 00:11:40.182 Total : 14854.12 58.02 0.00 0.00 0.00 0.00 0.00 00:11:40.182 00:11:41.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.116 Nvme0n1 : 9.00 14888.78 58.16 0.00 0.00 0.00 0.00 0.00 00:11:41.116 =================================================================================================================== 00:11:41.116 Total : 14888.78 58.16 0.00 0.00 0.00 0.00 0.00 00:11:41.116 00:11:42.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.050 Nvme0n1 : 10.00 14934.20 58.34 0.00 0.00 0.00 0.00 0.00 00:11:42.050 =================================================================================================================== 00:11:42.050 Total : 14934.20 58.34 0.00 0.00 0.00 0.00 0.00 00:11:42.050 00:11:42.050 00:11:42.050 Latency(us) 00:11:42.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.050 Nvme0n1 : 10.01 14938.51 58.35 0.00 0.00 8563.65 2560.76 19320.98 00:11:42.050 =================================================================================================================== 00:11:42.050 Total : 14938.51 58.35 0.00 0.00 8563.65 2560.76 19320.98 00:11:42.050 0 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2194411 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2194411 ']' 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2194411 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2194411 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2194411' 00:11:42.050 killing process with pid 2194411 00:11:42.050 17:34:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2194411 00:11:42.050 Received shutdown signal, test time was about 10.000000 seconds 00:11:42.050 00:11:42.050 Latency(us) 00:11:42.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.050 =================================================================================================================== 00:11:42.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:42.050 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2194411 00:11:42.308 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.565 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:42.823 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:42.823 17:34:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:43.081 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:43.081 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:43.081 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:43.339 [2024-07-15 17:34:38.436579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:43.339 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:43.597 request: 00:11:43.597 { 00:11:43.597 "uuid": "ead3c50b-b119-4d3a-a2d4-74e60363d415", 00:11:43.597 "method": "bdev_lvol_get_lvstores", 00:11:43.597 "req_id": 1 00:11:43.597 } 00:11:43.597 Got JSON-RPC error response 00:11:43.597 response: 00:11:43.597 { 00:11:43.597 "code": -19, 00:11:43.597 "message": "No such device" 00:11:43.597 } 00:11:43.597 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:43.597 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:43.597 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:43.597 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:43.597 17:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:44.162 aio_bdev 00:11:44.162 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f1478c1-b41b-4759-95bd-8ecbbd8bb03d 00:11:44.162 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5f1478c1-b41b-4759-95bd-8ecbbd8bb03d 00:11:44.162 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:44.162 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:44.162 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:44.162 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:44.162 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:44.420 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f1478c1-b41b-4759-95bd-8ecbbd8bb03d -t 2000 00:11:44.678 [ 00:11:44.678 { 00:11:44.678 "name": "5f1478c1-b41b-4759-95bd-8ecbbd8bb03d", 00:11:44.678 "aliases": [ 00:11:44.678 "lvs/lvol" 00:11:44.678 ], 00:11:44.678 "product_name": "Logical Volume", 00:11:44.678 "block_size": 4096, 00:11:44.678 "num_blocks": 38912, 00:11:44.678 "uuid": "5f1478c1-b41b-4759-95bd-8ecbbd8bb03d", 00:11:44.678 "assigned_rate_limits": { 00:11:44.678 "rw_ios_per_sec": 0, 00:11:44.678 "rw_mbytes_per_sec": 0, 00:11:44.678 "r_mbytes_per_sec": 0, 00:11:44.678 "w_mbytes_per_sec": 0 00:11:44.678 }, 00:11:44.678 "claimed": false, 00:11:44.678 "zoned": false, 00:11:44.678 "supported_io_types": { 00:11:44.678 "read": true, 00:11:44.678 "write": true, 00:11:44.678 "unmap": true, 00:11:44.678 "flush": false, 00:11:44.678 "reset": true, 00:11:44.678 "nvme_admin": false, 00:11:44.678 "nvme_io": false, 00:11:44.678 
"nvme_io_md": false, 00:11:44.678 "write_zeroes": true, 00:11:44.678 "zcopy": false, 00:11:44.678 "get_zone_info": false, 00:11:44.678 "zone_management": false, 00:11:44.678 "zone_append": false, 00:11:44.678 "compare": false, 00:11:44.678 "compare_and_write": false, 00:11:44.678 "abort": false, 00:11:44.678 "seek_hole": true, 00:11:44.678 "seek_data": true, 00:11:44.678 "copy": false, 00:11:44.678 "nvme_iov_md": false 00:11:44.678 }, 00:11:44.678 "driver_specific": { 00:11:44.678 "lvol": { 00:11:44.678 "lvol_store_uuid": "ead3c50b-b119-4d3a-a2d4-74e60363d415", 00:11:44.678 "base_bdev": "aio_bdev", 00:11:44.678 "thin_provision": false, 00:11:44.678 "num_allocated_clusters": 38, 00:11:44.678 "snapshot": false, 00:11:44.678 "clone": false, 00:11:44.678 "esnap_clone": false 00:11:44.678 } 00:11:44.678 } 00:11:44.678 } 00:11:44.678 ] 00:11:44.678 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:44.678 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:44.678 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:44.678 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:44.936 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:44.936 17:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:44.936 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:44.936 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f1478c1-b41b-4759-95bd-8ecbbd8bb03d 00:11:45.503 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ead3c50b-b119-4d3a-a2d4-74e60363d415 00:11:45.761 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:46.022 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.022 00:11:46.022 real 0m17.583s 00:11:46.022 user 0m17.015s 00:11:46.022 sys 0m1.884s 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:46.023 ************************************ 00:11:46.023 END TEST lvs_grow_clean 00:11:46.023 ************************************ 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.023 ************************************ 00:11:46.023 START TEST lvs_grow_dirty 00:11:46.023 ************************************ 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:46.023 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.024 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:46.024 17:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:46.290 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:46.290 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:46.548 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:11:46.548 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:11:46.548 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:46.806 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:46.806 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:46.806 17:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 lvol 150 00:11:47.065 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d38742fb-375d-47d3-9044-7387c3c6d899 00:11:47.065 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:47.065 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:47.324 
[2024-07-15 17:34:42.272151] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:47.324 [2024-07-15 17:34:42.272264] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:47.324 true 00:11:47.324 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:11:47.324 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:47.582 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:47.582 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:47.840 17:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d38742fb-375d-47d3-9044-7387c3c6d899 00:11:48.098 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:48.357 [2024-07-15 17:34:43.351568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.357 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2196583 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2196583 /var/tmp/bdevperf.sock 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2196583 ']' 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:48.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
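On the initiator side, each variant drives the exported namespace with the standalone bdevperf application: it starts in wait-for-RPC mode on its own socket, gets an NVMe-oF controller attached over TCP, and then runs the queued 4 KiB random-write job for ten seconds while the lvstore is grown underneath. A minimal sketch of that invocation, with paths shortened and the backgrounding/wait handling of the real script omitted:

# start bdevperf in -z (wait-for-RPC) mode on its own socket
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# attach the exported namespace as bdev Nvme0n1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # wait until the bdev exists
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                  # kick off the 10 s randwrite run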
00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.616 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:48.616 [2024-07-15 17:34:43.698956] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:11:48.616 [2024-07-15 17:34:43.699029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196583 ] 00:11:48.616 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.874 [2024-07-15 17:34:43.760284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.874 [2024-07-15 17:34:43.875560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.874 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.874 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:48.874 17:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:49.473 Nvme0n1 00:11:49.473 17:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:49.731 [ 00:11:49.731 { 00:11:49.731 "name": "Nvme0n1", 00:11:49.731 "aliases": [ 00:11:49.731 "d38742fb-375d-47d3-9044-7387c3c6d899" 00:11:49.731 ], 00:11:49.731 "product_name": "NVMe disk", 00:11:49.731 "block_size": 4096, 00:11:49.731 "num_blocks": 38912, 00:11:49.731 "uuid": "d38742fb-375d-47d3-9044-7387c3c6d899", 00:11:49.731 "assigned_rate_limits": { 00:11:49.731 "rw_ios_per_sec": 0, 00:11:49.731 "rw_mbytes_per_sec": 0, 00:11:49.731 "r_mbytes_per_sec": 0, 00:11:49.731 "w_mbytes_per_sec": 0 00:11:49.731 }, 00:11:49.731 "claimed": false, 00:11:49.731 "zoned": false, 00:11:49.731 "supported_io_types": { 00:11:49.731 "read": true, 00:11:49.731 "write": true, 00:11:49.731 "unmap": true, 00:11:49.731 "flush": true, 00:11:49.731 "reset": true, 00:11:49.731 "nvme_admin": true, 00:11:49.731 "nvme_io": true, 00:11:49.731 "nvme_io_md": false, 00:11:49.731 "write_zeroes": true, 00:11:49.731 "zcopy": false, 00:11:49.731 "get_zone_info": false, 00:11:49.731 "zone_management": false, 00:11:49.731 "zone_append": false, 00:11:49.731 "compare": true, 00:11:49.731 "compare_and_write": true, 00:11:49.731 "abort": true, 00:11:49.731 "seek_hole": false, 00:11:49.731 "seek_data": false, 00:11:49.731 "copy": true, 00:11:49.731 "nvme_iov_md": false 00:11:49.731 }, 00:11:49.731 "memory_domains": [ 00:11:49.731 { 00:11:49.731 "dma_device_id": "system", 00:11:49.731 "dma_device_type": 1 00:11:49.731 } 00:11:49.731 ], 00:11:49.731 "driver_specific": { 00:11:49.731 "nvme": [ 00:11:49.731 { 00:11:49.731 "trid": { 00:11:49.731 "trtype": "TCP", 00:11:49.731 "adrfam": "IPv4", 00:11:49.731 "traddr": "10.0.0.2", 00:11:49.731 "trsvcid": "4420", 00:11:49.731 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:49.731 }, 00:11:49.731 "ctrlr_data": { 00:11:49.731 "cntlid": 1, 00:11:49.731 "vendor_id": "0x8086", 00:11:49.731 "model_number": "SPDK bdev Controller", 00:11:49.731 "serial_number": "SPDK0", 
00:11:49.731 "firmware_revision": "24.09", 00:11:49.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:49.731 "oacs": { 00:11:49.731 "security": 0, 00:11:49.731 "format": 0, 00:11:49.731 "firmware": 0, 00:11:49.731 "ns_manage": 0 00:11:49.731 }, 00:11:49.731 "multi_ctrlr": true, 00:11:49.731 "ana_reporting": false 00:11:49.731 }, 00:11:49.731 "vs": { 00:11:49.731 "nvme_version": "1.3" 00:11:49.731 }, 00:11:49.731 "ns_data": { 00:11:49.731 "id": 1, 00:11:49.731 "can_share": true 00:11:49.731 } 00:11:49.731 } 00:11:49.731 ], 00:11:49.731 "mp_policy": "active_passive" 00:11:49.731 } 00:11:49.731 } 00:11:49.731 ] 00:11:49.731 17:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2196717 00:11:49.731 17:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:49.731 17:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:49.731 Running I/O for 10 seconds... 00:11:50.664 Latency(us) 00:11:50.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.664 Nvme0n1 : 1.00 14340.00 56.02 0.00 0.00 0.00 0.00 0.00 00:11:50.664 =================================================================================================================== 00:11:50.664 Total : 14340.00 56.02 0.00 0.00 0.00 0.00 0.00 00:11:50.664 00:11:51.598 17:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:11:51.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.598 Nvme0n1 : 2.00 14505.50 56.66 0.00 0.00 0.00 0.00 0.00 00:11:51.598 =================================================================================================================== 00:11:51.598 Total : 14505.50 56.66 0.00 0.00 0.00 0.00 0.00 00:11:51.598 00:11:51.856 true 00:11:51.856 17:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:11:51.856 17:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:52.114 17:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:52.114 17:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:52.114 17:34:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2196717 00:11:52.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.679 Nvme0n1 : 3.00 14583.67 56.97 0.00 0.00 0.00 0.00 0.00 00:11:52.679 =================================================================================================================== 00:11:52.679 Total : 14583.67 56.97 0.00 0.00 0.00 0.00 0.00 00:11:52.679 00:11:53.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.612 Nvme0n1 : 4.00 14652.25 57.24 0.00 0.00 0.00 0.00 0.00 00:11:53.612 =================================================================================================================== 00:11:53.613 Total : 14652.25 57.24 0.00 
0.00 0.00 0.00 0.00 00:11:53.613 00:11:54.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.986 Nvme0n1 : 5.00 14703.20 57.43 0.00 0.00 0.00 0.00 0.00 00:11:54.986 =================================================================================================================== 00:11:54.986 Total : 14703.20 57.43 0.00 0.00 0.00 0.00 0.00 00:11:54.986 00:11:55.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.919 Nvme0n1 : 6.00 14760.33 57.66 0.00 0.00 0.00 0.00 0.00 00:11:55.919 =================================================================================================================== 00:11:55.919 Total : 14760.33 57.66 0.00 0.00 0.00 0.00 0.00 00:11:55.919 00:11:56.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.853 Nvme0n1 : 7.00 14851.14 58.01 0.00 0.00 0.00 0.00 0.00 00:11:56.853 =================================================================================================================== 00:11:56.853 Total : 14851.14 58.01 0.00 0.00 0.00 0.00 0.00 00:11:56.853 00:11:57.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.785 Nvme0n1 : 8.00 14915.88 58.27 0.00 0.00 0.00 0.00 0.00 00:11:57.785 =================================================================================================================== 00:11:57.785 Total : 14915.88 58.27 0.00 0.00 0.00 0.00 0.00 00:11:57.785 00:11:58.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.716 Nvme0n1 : 9.00 14932.22 58.33 0.00 0.00 0.00 0.00 0.00 00:11:58.716 =================================================================================================================== 00:11:58.716 Total : 14932.22 58.33 0.00 0.00 0.00 0.00 0.00 00:11:58.716 00:11:59.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.650 Nvme0n1 : 10.00 15006.80 58.62 0.00 0.00 0.00 0.00 0.00 00:11:59.650 =================================================================================================================== 00:11:59.650 Total : 15006.80 58.62 0.00 0.00 0.00 0.00 0.00 00:11:59.650 00:11:59.650 00:11:59.650 Latency(us) 00:11:59.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.650 Nvme0n1 : 10.01 15006.91 58.62 0.00 0.00 8524.35 4854.52 16019.91 00:11:59.650 =================================================================================================================== 00:11:59.650 Total : 15006.91 58.62 0.00 0.00 8524.35 4854.52 16019.91 00:11:59.650 0 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2196583 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2196583 ']' 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2196583 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2196583 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:59.650 17:34:54 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2196583' 00:11:59.650 killing process with pid 2196583 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2196583 00:11:59.650 Received shutdown signal, test time was about 10.000000 seconds 00:11:59.650 00:11:59.650 Latency(us) 00:11:59.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.650 =================================================================================================================== 00:11:59.650 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:59.650 17:34:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2196583 00:12:00.217 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.217 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:00.475 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:00.475 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:00.751 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:00.751 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:00.751 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2193971 00:12:00.751 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2193971 00:12:01.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2193971 Killed "${NVMF_APP[@]}" "$@" 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2198035 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2198035 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2198035 ']' 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.010 17:34:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:01.010 [2024-07-15 17:34:55.942644] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:01.010 [2024-07-15 17:34:55.942728] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.010 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.010 [2024-07-15 17:34:56.007229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.010 [2024-07-15 17:34:56.116441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.010 [2024-07-15 17:34:56.116501] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.010 [2024-07-15 17:34:56.116514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.010 [2024-07-15 17:34:56.116524] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.010 [2024-07-15 17:34:56.116534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
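A note on the two NOTICE lines above: they already give the recipe for pulling trace data from this target instance. A minimal sketch of doing so while the app is still running (tool name and shared-memory id taken from the messages themselves; the output redirection is added here):

  spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # snapshot the nvmf tracepoints at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/             # or keep the raw shm file for offline analysis/debug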
00:12:01.010 [2024-07-15 17:34:56.116580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.269 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.269 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:01.269 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.269 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.269 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:01.269 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.269 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:01.528 [2024-07-15 17:34:56.533476] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:01.528 [2024-07-15 17:34:56.533613] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:01.528 [2024-07-15 17:34:56.533670] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:01.528 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:01.529 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d38742fb-375d-47d3-9044-7387c3c6d899 00:12:01.529 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d38742fb-375d-47d3-9044-7387c3c6d899 00:12:01.529 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:01.529 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:01.529 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:01.529 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:01.529 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:01.788 17:34:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d38742fb-375d-47d3-9044-7387c3c6d899 -t 2000 00:12:02.047 [ 00:12:02.047 { 00:12:02.047 "name": "d38742fb-375d-47d3-9044-7387c3c6d899", 00:12:02.047 "aliases": [ 00:12:02.047 "lvs/lvol" 00:12:02.047 ], 00:12:02.047 "product_name": "Logical Volume", 00:12:02.047 "block_size": 4096, 00:12:02.047 "num_blocks": 38912, 00:12:02.047 "uuid": "d38742fb-375d-47d3-9044-7387c3c6d899", 00:12:02.047 "assigned_rate_limits": { 00:12:02.047 "rw_ios_per_sec": 0, 00:12:02.047 "rw_mbytes_per_sec": 0, 00:12:02.047 "r_mbytes_per_sec": 0, 00:12:02.047 "w_mbytes_per_sec": 0 00:12:02.047 }, 00:12:02.047 "claimed": false, 00:12:02.047 "zoned": false, 00:12:02.047 "supported_io_types": { 00:12:02.047 "read": true, 00:12:02.047 "write": true, 00:12:02.047 "unmap": true, 00:12:02.047 "flush": false, 00:12:02.047 "reset": true, 00:12:02.047 "nvme_admin": false, 00:12:02.047 "nvme_io": false, 00:12:02.047 "nvme_io_md": 
false, 00:12:02.047 "write_zeroes": true, 00:12:02.047 "zcopy": false, 00:12:02.047 "get_zone_info": false, 00:12:02.047 "zone_management": false, 00:12:02.047 "zone_append": false, 00:12:02.047 "compare": false, 00:12:02.047 "compare_and_write": false, 00:12:02.047 "abort": false, 00:12:02.047 "seek_hole": true, 00:12:02.047 "seek_data": true, 00:12:02.047 "copy": false, 00:12:02.047 "nvme_iov_md": false 00:12:02.047 }, 00:12:02.047 "driver_specific": { 00:12:02.047 "lvol": { 00:12:02.047 "lvol_store_uuid": "fc656ceb-78c8-4ac6-9284-cb6e4eac1803", 00:12:02.047 "base_bdev": "aio_bdev", 00:12:02.047 "thin_provision": false, 00:12:02.047 "num_allocated_clusters": 38, 00:12:02.047 "snapshot": false, 00:12:02.047 "clone": false, 00:12:02.047 "esnap_clone": false 00:12:02.047 } 00:12:02.047 } 00:12:02.047 } 00:12:02.047 ] 00:12:02.047 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:02.047 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:02.047 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:02.306 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:02.306 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:02.306 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:02.565 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:02.565 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:02.824 [2024-07-15 17:34:57.802351] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
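The long case/type trace here is the NOT helper resolving the rpc.py path before running it; judging from its usage, NOT simply inverts the wrapped command's exit status, so the assertion at this point amounts to roughly the following (command copied from the trace; the helper semantics are inferred from context, not from its source):

  # with aio_bdev removed, the lvstore lookup is expected to fail
  if scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803; then
      echo 'lvstore lookup unexpectedly succeeded' >&2
      exit 1
  fi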
00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:02.824 17:34:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:03.083 request: 00:12:03.083 { 00:12:03.083 "uuid": "fc656ceb-78c8-4ac6-9284-cb6e4eac1803", 00:12:03.083 "method": "bdev_lvol_get_lvstores", 00:12:03.083 "req_id": 1 00:12:03.083 } 00:12:03.083 Got JSON-RPC error response 00:12:03.083 response: 00:12:03.083 { 00:12:03.083 "code": -19, 00:12:03.083 "message": "No such device" 00:12:03.083 } 00:12:03.083 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:03.083 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.083 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.083 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.083 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:03.341 aio_bdev 00:12:03.341 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d38742fb-375d-47d3-9044-7387c3c6d899 00:12:03.341 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d38742fb-375d-47d3-9044-7387c3c6d899 00:12:03.341 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:03.341 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:03.341 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:03.341 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:03.341 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:03.600 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d38742fb-375d-47d3-9044-7387c3c6d899 -t 2000 00:12:03.858 [ 00:12:03.858 { 00:12:03.858 "name": "d38742fb-375d-47d3-9044-7387c3c6d899", 00:12:03.858 "aliases": [ 00:12:03.858 "lvs/lvol" 00:12:03.858 ], 00:12:03.858 "product_name": "Logical Volume", 00:12:03.858 "block_size": 4096, 00:12:03.858 "num_blocks": 38912, 00:12:03.858 "uuid": "d38742fb-375d-47d3-9044-7387c3c6d899", 00:12:03.858 "assigned_rate_limits": { 00:12:03.858 "rw_ios_per_sec": 0, 00:12:03.858 "rw_mbytes_per_sec": 0, 00:12:03.858 "r_mbytes_per_sec": 0, 00:12:03.858 "w_mbytes_per_sec": 0 00:12:03.858 }, 00:12:03.858 "claimed": false, 00:12:03.858 "zoned": false, 00:12:03.858 "supported_io_types": { 
00:12:03.858 "read": true, 00:12:03.858 "write": true, 00:12:03.858 "unmap": true, 00:12:03.858 "flush": false, 00:12:03.858 "reset": true, 00:12:03.858 "nvme_admin": false, 00:12:03.858 "nvme_io": false, 00:12:03.858 "nvme_io_md": false, 00:12:03.858 "write_zeroes": true, 00:12:03.858 "zcopy": false, 00:12:03.858 "get_zone_info": false, 00:12:03.858 "zone_management": false, 00:12:03.858 "zone_append": false, 00:12:03.858 "compare": false, 00:12:03.858 "compare_and_write": false, 00:12:03.858 "abort": false, 00:12:03.858 "seek_hole": true, 00:12:03.858 "seek_data": true, 00:12:03.858 "copy": false, 00:12:03.858 "nvme_iov_md": false 00:12:03.858 }, 00:12:03.858 "driver_specific": { 00:12:03.858 "lvol": { 00:12:03.858 "lvol_store_uuid": "fc656ceb-78c8-4ac6-9284-cb6e4eac1803", 00:12:03.858 "base_bdev": "aio_bdev", 00:12:03.858 "thin_provision": false, 00:12:03.858 "num_allocated_clusters": 38, 00:12:03.858 "snapshot": false, 00:12:03.858 "clone": false, 00:12:03.858 "esnap_clone": false 00:12:03.858 } 00:12:03.858 } 00:12:03.858 } 00:12:03.858 ] 00:12:03.858 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:03.858 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:03.858 17:34:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:04.116 17:34:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:04.116 17:34:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:04.116 17:34:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:04.375 17:34:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:04.375 17:34:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d38742fb-375d-47d3-9044-7387c3c6d899 00:12:04.634 17:34:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803 00:12:04.927 17:34:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:05.186 00:12:05.186 real 0m19.125s 00:12:05.186 user 0m48.599s 00:12:05.186 sys 0m4.597s 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:05.186 ************************************ 00:12:05.186 END TEST lvs_grow_dirty 00:12:05.186 ************************************ 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
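Condensed, the dirty-grow flow that just finished looks like this (RPC names, UUIDs, pid and the blobstore recovery NOTICE are all from the trace above; the ordering is simplified and paths shortened):

  scripts/rpc.py bdev_lvol_grow_lvstore -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803   # grow the lvstore while I/O runs
  kill -9 2193971                                                                 # hard-kill the target: lvstore left dirty
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096          # re-attach -> 'Performing recovery on blobstore'
  scripts/rpc.py bdev_lvol_get_lvstores -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803   # free/total clusters re-checked
  scripts/rpc.py bdev_lvol_delete d38742fb-375d-47d3-9044-7387c3c6d899            # teardown
  scripts/rpc.py bdev_lvol_delete_lvstore -u fc656ceb-78c8-4ac6-9284-cb6e4eac1803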
00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:05.186 nvmf_trace.0 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:05.186 rmmod nvme_tcp 00:12:05.186 rmmod nvme_fabrics 00:12:05.186 rmmod nvme_keyring 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2198035 ']' 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2198035 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2198035 ']' 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2198035 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2198035 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2198035' 00:12:05.186 killing process with pid 2198035 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2198035 00:12:05.186 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2198035 00:12:05.446 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:05.446 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:05.446 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:05.446 
17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.446 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:05.446 17:35:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.446 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.446 17:35:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.976 17:35:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.976 00:12:07.976 real 0m41.981s 00:12:07.976 user 1m11.344s 00:12:07.976 sys 0m8.239s 00:12:07.976 17:35:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.976 17:35:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:07.976 ************************************ 00:12:07.976 END TEST nvmf_lvs_grow 00:12:07.976 ************************************ 00:12:07.976 17:35:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:07.976 17:35:02 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:07.976 17:35:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.976 17:35:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.976 17:35:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.976 ************************************ 00:12:07.976 START TEST nvmf_bdev_io_wait 00:12:07.977 ************************************ 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:07.977 * Looking for test storage... 
00:12:07.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.977 17:35:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:09.897 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:09.897 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:09.897 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:09.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:09.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:09.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:09.897 00:12:09.897 --- 10.0.0.2 ping statistics --- 00:12:09.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.897 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:09.897 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:09.897 00:12:09.897 --- 10.0.0.1 ping statistics --- 00:12:09.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.898 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2200487 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2200487 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2200487 ']' 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.898 17:35:04 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 [2024-07-15 17:35:04.881780] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
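The interface plumbing behind these two pings, condensed from the nvmf_tcp_init trace above (interface and namespace names are the ones detected on this node):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator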
00:12:09.898 [2024-07-15 17:35:04.881910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.898 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.898 [2024-07-15 17:35:04.952362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.156 [2024-07-15 17:35:05.071333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.156 [2024-07-15 17:35:05.071391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.156 [2024-07-15 17:35:05.071408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.156 [2024-07-15 17:35:05.071421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.156 [2024-07-15 17:35:05.071433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.156 [2024-07-15 17:35:05.071518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.156 [2024-07-15 17:35:05.071575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.156 [2024-07-15 17:35:05.071693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.156 [2024-07-15 17:35:05.071695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.721 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.980 [2024-07-15 17:35:05.922702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
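Since the target here was started with --wait-for-rpc, subsystem initialization is deferred until framework_start_init, which is what lets the test shrink the bdev_io pool before anything else comes up. The RPC sequence being traced is essentially (arguments from the trace; reading -p/-c as pool/cache size follows the usual rpc.py option names, and the tiny pool is presumably what forces the I/O-wait path this suite is named after):

  scripts/rpc.py bdev_set_options -p 5 -c 1        # only 5 bdev_io objects in the pool, per-thread cache of 1
  scripts/rpc.py framework_start_init              # now finish bringing the target up
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192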
00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.980 Malloc0 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:10.980 [2024-07-15 17:35:05.989335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2200690 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2200692 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:10.980 { 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme$subsystem", 00:12:10.980 "trtype": "$TEST_TRANSPORT", 00:12:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "$NVMF_PORT", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.980 "hdgst": ${hdgst:-false}, 00:12:10.980 "ddgst": ${ddgst:-false} 00:12:10.980 }, 00:12:10.980 "method": "bdev_nvme_attach_controller" 00:12:10.980 } 00:12:10.980 EOF 00:12:10.980 )") 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:10.980 17:35:05 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2200694 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:10.980 { 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme$subsystem", 00:12:10.980 "trtype": "$TEST_TRANSPORT", 00:12:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "$NVMF_PORT", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.980 "hdgst": ${hdgst:-false}, 00:12:10.980 "ddgst": ${ddgst:-false} 00:12:10.980 }, 00:12:10.980 "method": "bdev_nvme_attach_controller" 00:12:10.980 } 00:12:10.980 EOF 00:12:10.980 )") 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2200698 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:10.980 { 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme$subsystem", 00:12:10.980 "trtype": "$TEST_TRANSPORT", 00:12:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "$NVMF_PORT", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.980 "hdgst": ${hdgst:-false}, 00:12:10.980 "ddgst": ${ddgst:-false} 00:12:10.980 }, 00:12:10.980 "method": "bdev_nvme_attach_controller" 00:12:10.980 } 00:12:10.980 EOF 00:12:10.980 )") 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:12:10.980 { 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme$subsystem", 00:12:10.980 "trtype": "$TEST_TRANSPORT", 00:12:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "$NVMF_PORT", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.980 "hdgst": ${hdgst:-false}, 00:12:10.980 "ddgst": ${ddgst:-false} 00:12:10.980 }, 00:12:10.980 "method": "bdev_nvme_attach_controller" 00:12:10.980 } 00:12:10.980 EOF 00:12:10.980 )") 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2200690 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme1", 00:12:10.980 "trtype": "tcp", 00:12:10.980 "traddr": "10.0.0.2", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "4420", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.980 "hdgst": false, 00:12:10.980 "ddgst": false 00:12:10.980 }, 00:12:10.980 "method": "bdev_nvme_attach_controller" 00:12:10.980 }' 00:12:10.980 17:35:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:10.980 17:35:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:10.980 17:35:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
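The config=() array, the cat <<-EOF blobs, and the trailing jq . / IFS=, / printf '%s\n' calls traced above are nvmf/common.sh's gen_nvmf_target_json helper assembling the JSON that each bdevperf instance reads. A minimal sketch of the pattern, reconstructed from the visible trace; the outer "subsystems"/"bdev" wrapper and the column-0 <<EOF terminators are assumptions (the helper itself uses tab-indented <<-EOF heredocs):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one bdev_nvme_attach_controller blob per requested subsystem
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # comma-join the blobs and pretty-print them; only the jq . / IFS=, / printf '%s\n' join
    # is visible in the trace above, the wrapper object here is an assumption
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ]
    }
  ]
}
EOF
}

Each bdevperf consumes the result through process substitution, which is how --json /dev/fd/63 shows up in the command lines above.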
00:12:10.980 17:35:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme1", 00:12:10.980 "trtype": "tcp", 00:12:10.980 "traddr": "10.0.0.2", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "4420", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.980 "hdgst": false, 00:12:10.980 "ddgst": false 00:12:10.980 }, 00:12:10.980 "method": "bdev_nvme_attach_controller" 00:12:10.980 }' 00:12:10.980 17:35:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:10.980 17:35:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme1", 00:12:10.980 "trtype": "tcp", 00:12:10.980 "traddr": "10.0.0.2", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "4420", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.980 "hdgst": false, 00:12:10.980 "ddgst": false 00:12:10.980 }, 00:12:10.980 "method": "bdev_nvme_attach_controller" 00:12:10.980 }' 00:12:10.980 17:35:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:10.980 17:35:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:10.980 "params": { 00:12:10.980 "name": "Nvme1", 00:12:10.980 "trtype": "tcp", 00:12:10.980 "traddr": "10.0.0.2", 00:12:10.980 "adrfam": "ipv4", 00:12:10.980 "trsvcid": "4420", 00:12:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.980 "hdgst": false, 00:12:10.981 "ddgst": false 00:12:10.981 }, 00:12:10.981 "method": "bdev_nvme_attach_controller" 00:12:10.981 }' 00:12:10.981 [2024-07-15 17:35:06.036312] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:10.981 [2024-07-15 17:35:06.036312] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:10.981 [2024-07-15 17:35:06.036396] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 17:35:06.036396] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:10.981 --proc-type=auto ] 00:12:10.981 [2024-07-15 17:35:06.037249] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:10.981 [2024-07-15 17:35:06.037321] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:10.981 [2024-07-15 17:35:06.037898] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
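bdev_io_wait.sh launches four bdevperf jobs against the same cnode1 namespace at once, one workload per core mask, and only waits on them afterwards, which is why their EAL start-up messages interleave in the trace. A condensed sketch of that flow; WRITE_PID/READ_PID are assumed names (only FLUSH_PID and UNMAP_PID are visible in this part of the log), and the real script issues one wait per PID:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# four concurrent jobs, one DPDK shm id (-i) and core mask (-m) each
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

sync                                                # matches target/bdev_io_wait.sh@35 above
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID     # the log waits on 2200690/2200692/2200694/2200698 in turn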
00:12:10.981 [2024-07-15 17:35:06.037961] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:10.981 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.239 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.239 [2024-07-15 17:35:06.207285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.239 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.239 [2024-07-15 17:35:06.303960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:11.239 [2024-07-15 17:35:06.306664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.497 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.497 [2024-07-15 17:35:06.404476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.497 [2024-07-15 17:35:06.415458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:11.497 [2024-07-15 17:35:06.485768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.497 [2024-07-15 17:35:06.506290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:11.497 [2024-07-15 17:35:06.579747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:11.762 Running I/O for 1 seconds... 00:12:11.762 Running I/O for 1 seconds... 00:12:11.762 Running I/O for 1 seconds... 00:12:11.762 Running I/O for 1 seconds... 00:12:12.723 00:12:12.723 Latency(us) 00:12:12.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.723 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:12.723 Nvme1n1 : 1.02 6762.54 26.42 0.00 0.00 18742.02 7864.32 27379.48 00:12:12.723 =================================================================================================================== 00:12:12.723 Total : 6762.54 26.42 0.00 0.00 18742.02 7864.32 27379.48 00:12:12.723 00:12:12.723 Latency(us) 00:12:12.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.723 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:12.723 Nvme1n1 : 1.01 6969.14 27.22 0.00 0.00 18290.65 7815.77 36505.98 00:12:12.723 =================================================================================================================== 00:12:12.723 Total : 6969.14 27.22 0.00 0.00 18290.65 7815.77 36505.98 00:12:12.723 00:12:12.723 Latency(us) 00:12:12.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.723 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:12.723 Nvme1n1 : 1.00 197485.25 771.43 0.00 0.00 645.67 267.00 831.34 00:12:12.723 =================================================================================================================== 00:12:12.723 Total : 197485.25 771.43 0.00 0.00 645.67 267.00 831.34 00:12:12.723 00:12:12.723 Latency(us) 00:12:12.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.723 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:12.723 Nvme1n1 : 1.01 7899.08 30.86 0.00 0.00 16153.57 5704.06 20194.80 00:12:12.723 =================================================================================================================== 00:12:12.723 Total : 7899.08 30.86 0.00 0.00 16153.57 5704.06 20194.80 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2200692 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2200694 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2200698 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.312 rmmod nvme_tcp 00:12:13.312 rmmod nvme_fabrics 00:12:13.312 rmmod nvme_keyring 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2200487 ']' 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2200487 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2200487 ']' 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2200487 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2200487 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2200487' 00:12:13.312 killing process with pid 2200487 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2200487 00:12:13.312 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2200487 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.572 17:35:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.475 17:35:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:15.475 00:12:15.475 real 0m7.966s 00:12:15.475 user 0m20.301s 00:12:15.475 sys 0m3.419s 00:12:15.475 17:35:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.475 17:35:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:15.475 ************************************ 00:12:15.475 END TEST nvmf_bdev_io_wait 00:12:15.475 ************************************ 00:12:15.475 17:35:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:15.475 17:35:10 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:15.475 17:35:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:15.475 17:35:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.475 17:35:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.733 ************************************ 00:12:15.733 START TEST nvmf_queue_depth 00:12:15.733 ************************************ 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:15.733 * Looking for test storage... 
00:12:15.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.733 17:35:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:15.734 17:35:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:17.630 
17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:17.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:17.630 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:17.631 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:17.631 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:17.631 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:17.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:17.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:12:17.631 00:12:17.631 --- 10.0.0.2 ping statistics --- 00:12:17.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.631 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:17.631 00:12:17.631 --- 10.0.0.1 ping statistics --- 00:12:17.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.631 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.631 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2202927 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2202927 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2202927 ']' 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.889 17:35:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:17.889 [2024-07-15 17:35:12.837486] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
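Condensed, the nvmf_tcp_init steps traced above move the first E810 port into a private network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) exchange real NVMe/TCP traffic over the physical link; the commands follow the trace, only the ordering comments are added:

ip netns add cvl_0_0_ns_spdk                                    # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                              # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target ns -> root ns sanity check

Every nvmf_tgt invocation after this point is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP gets re-prefixed with NVMF_TARGET_NS_CMD above.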
00:12:17.889 [2024-07-15 17:35:12.837558] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.889 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.889 [2024-07-15 17:35:12.902296] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.889 [2024-07-15 17:35:13.021495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.889 [2024-07-15 17:35:13.021550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.889 [2024-07-15 17:35:13.021566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.889 [2024-07-15 17:35:13.021580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.889 [2024-07-15 17:35:13.021597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.889 [2024-07-15 17:35:13.021638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:18.828 [2024-07-15 17:35:13.812972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:18.828 Malloc0 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.828 
17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:18.828 [2024-07-15 17:35:13.875830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2203039 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2203039 /var/tmp/bdevperf.sock 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2203039 ']' 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:18.828 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.829 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:18.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:18.829 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.829 17:35:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:18.829 [2024-07-15 17:35:13.923092] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
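Taken together, the queue_depth setup traced above provisions the target with a handful of RPCs and then points an idling bdevperf at it over a private RPC socket; a condensed sketch (the backgrounding of bdevperf and the line wrapping are mine, the commands themselves follow the trace):

# target side: nvmf_tgt was started as "nvmfappstart -m 0x2" inside cvl_0_0_ns_spdk
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0            # 64 MiB backing bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf idles in -z mode, gets its controller over its own RPC socket,
# then bdevperf.py triggers the 10 s verify run at queue depth 1024
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests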
00:12:18.829 [2024-07-15 17:35:13.923157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203039 ] 00:12:18.829 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.086 [2024-07-15 17:35:13.985105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.086 [2024-07-15 17:35:14.103583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.086 17:35:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.086 17:35:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:19.086 17:35:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:19.086 17:35:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.086 17:35:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:19.343 NVMe0n1 00:12:19.343 17:35:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.343 17:35:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:19.343 Running I/O for 10 seconds... 00:12:31.587 00:12:31.587 Latency(us) 00:12:31.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.587 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:31.587 Verification LBA range: start 0x0 length 0x4000 00:12:31.587 NVMe0n1 : 10.10 8493.27 33.18 0.00 0.00 120046.00 24660.95 88158.06 00:12:31.587 =================================================================================================================== 00:12:31.587 Total : 8493.27 33.18 0.00 0.00 120046.00 24660.95 88158.06 00:12:31.587 0 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2203039 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2203039 ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2203039 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2203039 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2203039' 00:12:31.587 killing process with pid 2203039 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2203039 00:12:31.587 Received shutdown signal, test time was about 10.000000 seconds 00:12:31.587 00:12:31.587 Latency(us) 00:12:31.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.587 
=================================================================================================================== 00:12:31.587 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2203039 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.587 rmmod nvme_tcp 00:12:31.587 rmmod nvme_fabrics 00:12:31.587 rmmod nvme_keyring 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2202927 ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2202927 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2202927 ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2202927 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2202927 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2202927' 00:12:31.587 killing process with pid 2202927 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2202927 00:12:31.587 17:35:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2202927 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.587 17:35:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.528 17:35:27 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.528 00:12:32.528 real 0m16.728s 00:12:32.528 user 0m23.607s 00:12:32.528 sys 0m2.966s 00:12:32.528 17:35:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.528 17:35:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 ************************************ 00:12:32.528 END TEST nvmf_queue_depth 00:12:32.528 ************************************ 00:12:32.528 17:35:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:32.528 17:35:27 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:32.528 17:35:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:32.528 17:35:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.528 17:35:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.528 ************************************ 00:12:32.528 START TEST nvmf_target_multipath 00:12:32.528 ************************************ 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:32.528 * Looking for test storage... 00:12:32.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:32.528 17:35:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.529 17:35:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.437 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:34.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:34.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:34.438 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:34.438 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:12:34.438 00:12:34.438 --- 10.0.0.2 ping statistics --- 00:12:34.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.438 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:12:34.438 00:12:34.438 --- 10.0.0.1 ping statistics --- 00:12:34.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.438 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:34.438 only one NIC for nvmf test 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.438 rmmod nvme_tcp 00:12:34.438 rmmod nvme_fabrics 00:12:34.438 rmmod nvme_keyring 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.438 17:35:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.980 00:12:36.980 real 0m4.164s 00:12:36.980 user 0m0.767s 00:12:36.980 sys 0m1.391s 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.980 17:35:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:36.980 ************************************ 00:12:36.980 END TEST nvmf_target_multipath 00:12:36.980 ************************************ 00:12:36.980 17:35:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:36.980 17:35:31 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:36.980 17:35:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.980 17:35:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.980 17:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.980 ************************************ 00:12:36.980 START TEST nvmf_zcopy 00:12:36.980 ************************************ 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:36.980 * Looking for test storage... 
00:12:36.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.980 17:35:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:38.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.886 
17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:38.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:38.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:38.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:12:38.886 00:12:38.886 --- 10.0.0.2 ping statistics --- 00:12:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.886 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:12:38.886 00:12:38.886 --- 10.0.0.1 ping statistics --- 00:12:38.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.886 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2208181 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2208181 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2208181 ']' 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.886 17:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.886 [2024-07-15 17:35:33.826910] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:38.886 [2024-07-15 17:35:33.826998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.886 [2024-07-15 17:35:33.890193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.886 [2024-07-15 17:35:33.999153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.886 [2024-07-15 17:35:33.999214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:38.886 [2024-07-15 17:35:33.999227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.886 [2024-07-15 17:35:33.999238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.886 [2024-07-15 17:35:33.999247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.886 [2024-07-15 17:35:33.999273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.145 [2024-07-15 17:35:34.148185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.145 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.146 [2024-07-15 17:35:34.164394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.146 malloc0 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.146 
17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:39.146 { 00:12:39.146 "params": { 00:12:39.146 "name": "Nvme$subsystem", 00:12:39.146 "trtype": "$TEST_TRANSPORT", 00:12:39.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:39.146 "adrfam": "ipv4", 00:12:39.146 "trsvcid": "$NVMF_PORT", 00:12:39.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:39.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:39.146 "hdgst": ${hdgst:-false}, 00:12:39.146 "ddgst": ${ddgst:-false} 00:12:39.146 }, 00:12:39.146 "method": "bdev_nvme_attach_controller" 00:12:39.146 } 00:12:39.146 EOF 00:12:39.146 )") 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:39.146 17:35:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:39.146 "params": { 00:12:39.146 "name": "Nvme1", 00:12:39.146 "trtype": "tcp", 00:12:39.146 "traddr": "10.0.0.2", 00:12:39.146 "adrfam": "ipv4", 00:12:39.146 "trsvcid": "4420", 00:12:39.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:39.146 "hdgst": false, 00:12:39.146 "ddgst": false 00:12:39.146 }, 00:12:39.146 "method": "bdev_nvme_attach_controller" 00:12:39.146 }' 00:12:39.146 [2024-07-15 17:35:34.251177] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:12:39.146 [2024-07-15 17:35:34.251259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208204 ] 00:12:39.146 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.406 [2024-07-15 17:35:34.313933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.406 [2024-07-15 17:35:34.432074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.664 Running I/O for 10 seconds... 
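Before bdevperf takes over for the 10 second verify workload, the target-side configuration issued through rpc_cmd can be read straight out of the xtrace records. As a hedged sketch (rpc.py path and default RPC socket assumed; subcommands and flags copied from the records above), the core of that setup as plain rpc.py calls would look like:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with zero-copy enabled, flags as logged above
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem cnode1: any host allowed, serial SPDK00000000000001, up to 10 namespaces
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listener on the namespaced target address used throughout this run
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1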
00:12:51.880 00:12:51.880 Latency(us) 00:12:51.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.880 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:51.880 Verification LBA range: start 0x0 length 0x1000 00:12:51.880 Nvme1n1 : 10.01 5680.76 44.38 0.00 0.00 22467.09 1074.06 32428.18 00:12:51.880 =================================================================================================================== 00:12:51.880 Total : 5680.76 44.38 0.00 0.00 22467.09 1074.06 32428.18 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2209514 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.880 { 00:12:51.880 "params": { 00:12:51.880 "name": "Nvme$subsystem", 00:12:51.880 "trtype": "$TEST_TRANSPORT", 00:12:51.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.880 "adrfam": "ipv4", 00:12:51.880 "trsvcid": "$NVMF_PORT", 00:12:51.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.880 "hdgst": ${hdgst:-false}, 00:12:51.880 "ddgst": ${ddgst:-false} 00:12:51.880 }, 00:12:51.880 "method": "bdev_nvme_attach_controller" 00:12:51.880 } 00:12:51.880 EOF 00:12:51.880 )") 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:51.880 [2024-07-15 17:35:45.072572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.880 [2024-07-15 17:35:45.072618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
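The JSON fragment printed through gen_nvmf_target_json above is the initiator-side bdev configuration that bdevperf reads from /dev/fd/62 and /dev/fd/63. As a hedged sketch (file name invented, wrapper structure is the standard SPDK JSON config layout, parameter values copied from the printed fragment), the second run could be reproduced outside the harness roughly like this:

cat > /tmp/nvme1_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 5 second 50/50 random read/write run at queue depth 128 with 8 KiB I/O,
# matching the bdevperf invocation logged above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme1_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192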
00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:51.880 17:35:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.880 "params": { 00:12:51.880 "name": "Nvme1", 00:12:51.880 "trtype": "tcp", 00:12:51.880 "traddr": "10.0.0.2", 00:12:51.880 "adrfam": "ipv4", 00:12:51.880 "trsvcid": "4420", 00:12:51.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.880 "hdgst": false, 00:12:51.880 "ddgst": false 00:12:51.880 }, 00:12:51.880 "method": "bdev_nvme_attach_controller" 00:12:51.880 }' 00:12:51.880 [2024-07-15 17:35:45.080534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.880 [2024-07-15 17:35:45.080564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.880 [2024-07-15 17:35:45.088555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.880 [2024-07-15 17:35:45.088580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.880 [2024-07-15 17:35:45.096576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.880 [2024-07-15 17:35:45.096609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.880 [2024-07-15 17:35:45.104597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.880 [2024-07-15 17:35:45.104622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.880 [2024-07-15 17:35:45.112620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.880 [2024-07-15 17:35:45.112645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.880 [2024-07-15 17:35:45.113051] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:12:51.880 [2024-07-15 17:35:45.113109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2209514 ] 00:12:51.880 [2024-07-15 17:35:45.120642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.880 [2024-07-15 17:35:45.120667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.128665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.128689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.136685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.136710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.144705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.144729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.881 [2024-07-15 17:35:45.152728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.152753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.160752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.160776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.168774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.168798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.176795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.176819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.180871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.881 [2024-07-15 17:35:45.184828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.184853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.192868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.192938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.200865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.200899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.208894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.208931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.216914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 
17:35:45.216954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.224951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.224980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.232978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.233008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.240994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.241014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.249053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.249085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.257034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.257060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.265059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.265080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.273073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.273097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.281097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.281120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.289120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.289143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.297144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.297181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.303493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.881 [2024-07-15 17:35:45.305179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.305201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.313204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.313230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.321271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.321308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.329291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.329330] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.337298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.337341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.345323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.345364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.353381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.353420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.361361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.361400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.369355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.369380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.377402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.377441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.385426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.385465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.393426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.393453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.401443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.401467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.409465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.409489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.417501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.417530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.425521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.425548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.433545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.433571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.441569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.441596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.449588] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.449613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.457634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.457660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.465631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.881 [2024-07-15 17:35:45.465655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.881 [2024-07-15 17:35:45.473654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.473678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.481686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.481712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.489711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.489738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.497736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.497763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.505752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.505777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.513999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.514025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.521800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.521827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 Running I/O for 5 seconds... 
00:12:51.882 [2024-07-15 17:35:45.529820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.529845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.544652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.544683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.556293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.556323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.568007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.568035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.579555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.579585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.591362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.591393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.605439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.605470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.616736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.616766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.630147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.630173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.640249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.640279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.651998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.652025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.663030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.663057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.673809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.673839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.684743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.684774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.697817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 
[2024-07-15 17:35:45.697847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.708260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.708291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.720263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.720293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.731835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.731865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.745538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.745569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.756464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.756494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.767767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.767798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.779379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.779409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.790314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.790345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.802109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.802136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.813743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.813772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.825150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.825193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.836949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.836976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.848544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.848574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.860272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.860302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.871941] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.871967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.883303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.883333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.894717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.894749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.906422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.906451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.917365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.917395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.929076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.929103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.940787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.940817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.952483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.952512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.964314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.964351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.976022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.976049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.882 [2024-07-15 17:35:45.987368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.882 [2024-07-15 17:35:45.987398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:45.999013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:45.999040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.010347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.010377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.021956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.021983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.033742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.033772] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.045361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.045391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.056272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.056302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.067741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.067771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.080714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.080744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.091544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.091573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.103169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.103211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.114647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.114676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.127931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.127958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.138452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.138482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.150148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.150174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.161659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.161688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.172676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.172708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.184109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.184144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.195732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.195761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.206898] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.206943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.217837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.217867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.228844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.228874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.239953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.239980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.251251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.251281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.264236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.264266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.274224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.274254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.286006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.286033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.297127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.297154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.308392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.308421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.321282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.321312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.332327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.332356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.343855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.343894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.355181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.355211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.366731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.366761] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.378192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.378237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.389817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.389854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.401288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.401326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.412386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.412415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.425354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.425384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.435400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.435430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.447695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.447724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.458789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.458818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.471664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.471693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.481960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.481988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.883 [2024-07-15 17:35:46.493130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.883 [2024-07-15 17:35:46.493157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.506661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.506691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.516511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.516540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.527989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.528015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.538868] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.538907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.550241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.550271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.561656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.561686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.573113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.573140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.584253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.584283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.595611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.595641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.606810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.606841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.618335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.618373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.629827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.629856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.640891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.640920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.651154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.651181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.662380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.662411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.673323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.673361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.684229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.684260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.695487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.695517] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.706657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.706686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.719538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.719567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.730201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.730231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.741872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.741927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.753373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.753404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.764606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.764636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.778122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.778149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.789067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.789094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.800402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.800432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.811711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.811741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.823180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.823227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.835231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.835262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.846543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.846573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.860102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.860130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.871193] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.871223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.882512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.882542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.894047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.894074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.905662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.905693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.916960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.916987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.928268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.928298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.939949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.939976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.951489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.951519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.964618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.964647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.975391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.975420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.987523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.987552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:46.998805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:46.998835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.884 [2024-07-15 17:35:47.010397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.884 [2024-07-15 17:35:47.010428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.022095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.022123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.034229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.034259] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.045456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.045486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.057223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.057253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.069253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.069283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.080995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.081022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.091941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.091968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.103360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.103390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.114764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.114793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.126353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.126383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.137836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.137865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.149201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.149231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.160384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.160414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.171639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.171669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.183003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.183030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.194623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.194653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.206185] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.206229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.217591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.217620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.229072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.229102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.240265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.240294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.251604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.251633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.263369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.263400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.145 [2024-07-15 17:35:47.274958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.145 [2024-07-15 17:35:47.274985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.286285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.286316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.297506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.297536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.310938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.310965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.320747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.320774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.332079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.332107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.343387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.343416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.354508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.354538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.368004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.368031] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.378873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.378927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.390089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.390116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.401484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.401514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.412761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.412791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.423840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.423870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.435289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.435319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.446014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.446041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.457287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.457317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.470334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.470364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.480111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.480138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.403 [2024-07-15 17:35:47.491994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.403 [2024-07-15 17:35:47.492021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.404 [2024-07-15 17:35:47.503091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.404 [2024-07-15 17:35:47.503118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.404 [2024-07-15 17:35:47.514067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.404 [2024-07-15 17:35:47.514094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.404 [2024-07-15 17:35:47.527300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.404 [2024-07-15 17:35:47.527330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.404 [2024-07-15 17:35:47.537636] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.404 [2024-07-15 17:35:47.537666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.549753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.549783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.561142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.561179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.574296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.574327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.584886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.584940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.596439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.596469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.607553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.607583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.619129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.619157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.630464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.630494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.641809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.641839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.653293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.653323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.664729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.664759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.675837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.675867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.686983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.687011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.700163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.700198] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.710868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.710923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.722323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.722353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.733711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.733740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.745040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.745067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.756469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.756498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.767367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.767397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.778918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.778945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.664 [2024-07-15 17:35:47.790682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.664 [2024-07-15 17:35:47.790712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.802132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.802175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.813666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.813696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.826636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.826665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.836991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.837018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.848524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.848553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.860005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.860032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.873421] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.873452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.883834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.883863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.895978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.896005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.907177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.907221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.918730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.918770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.930124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.930166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.941467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.941496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.952713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.952743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.964127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.964154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.975747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.975777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.987769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.987799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:47.999674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:47.999705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:48.011543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:48.011583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:48.023363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:48.023393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:48.035061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:48.035088] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:48.046595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:48.046625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.923 [2024-07-15 17:35:48.058069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.923 [2024-07-15 17:35:48.058096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.069699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.069740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.081757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.081788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.093067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.093094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.104456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.104487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.116206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.116235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.127791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.127821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.139449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.139490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.151080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.151107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.162347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.162378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.174003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.174030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.185096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.185123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.196520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.196549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.208123] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.208150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.219813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.219843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.231406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.231437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.242751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.242781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.255750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.255779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.266317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.266347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.278038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.278065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.289608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.289638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.302902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.302944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.181 [2024-07-15 17:35:48.313694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.181 [2024-07-15 17:35:48.313723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.325087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.325115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.337090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.337117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.348922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.348949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.360359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.360396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.371613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.371642] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.383208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.383237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.394708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.394737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.406258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.406289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.419355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.419385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.430541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.430570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.441934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.441961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.453563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.453592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.465075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.465102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.476652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.476682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.488146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.488189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.499743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.499773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.511208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.511238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.524442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.524471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.534402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.534431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.439 [2024-07-15 17:35:48.546416] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.439 [2024-07-15 17:35:48.546445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.440 [2024-07-15 17:35:48.557727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.440 [2024-07-15 17:35:48.557757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.440 [2024-07-15 17:35:48.569087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.440 [2024-07-15 17:35:48.569113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.580524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.580554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.592084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.592111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.604197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.604227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.615810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.615839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.627437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.627467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.639187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.639217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.650318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.650348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.661786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.661815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.698 [2024-07-15 17:35:48.673105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.698 [2024-07-15 17:35:48.673133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.684972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.684999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.696141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.696185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.707628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.707657] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.719006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.719033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.730297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.730327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.741460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.741489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.752851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.752890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.764077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.764104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.775402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.775432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.786688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.786717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.798019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.798046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.809142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.809168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.820898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.820941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.699 [2024-07-15 17:35:48.832540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.699 [2024-07-15 17:35:48.832570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.844465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.844495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.856159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.856185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.867862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.867902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.878943] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.878970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.890414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.890444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.901461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.901490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.912886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.912931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.924561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.924591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.935417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.935448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.946695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.946725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.957938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.957965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.969202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.969231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.980340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.980370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:48.991264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:48.991293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.002705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.002734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.014266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.014296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.025210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.025240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.038361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.038391] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.048934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.048961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.060888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.060932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.072224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.072254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.959 [2024-07-15 17:35:49.083460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.959 [2024-07-15 17:35:49.083490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.094819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.094850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.106344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.106374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.118077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.118106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.129710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.129740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.141481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.141511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.152570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.152599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.164010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.164038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.175112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.175139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.186704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.186733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.198345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.198376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.210014] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.210041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.221755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.221785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.233191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.233237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.244470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.244500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.255829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.255859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.267509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.267539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.278791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.278821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.290192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.290236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.303287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.303317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.313892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.313935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.325154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.325182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.338026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.338053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.220 [2024-07-15 17:35:49.347978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.220 [2024-07-15 17:35:49.348005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.359660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.359691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.370329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.370358] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.381333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.381362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.392864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.392902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.403794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.403828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.414941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.414968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.426483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.426513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.437938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.437972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.448638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.448668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.459737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.459767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.471076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.471103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.482449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.482479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.493837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.493867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.505340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.505370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.517178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.517208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.528394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.528424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.539636] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.539666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.550976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.551003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.562055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.562082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.575060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.575087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.584673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.584703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.596503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.596532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.482 [2024-07-15 17:35:49.607718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.482 [2024-07-15 17:35:49.607747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.619108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.619136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.630345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.630376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.641744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.641774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.652968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.653003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.664656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.664686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.675899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.675943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.687441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.687471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.699087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.699114] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.710139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.710166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.723501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.723531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.734322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.734353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.745986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.746013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.757168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.757211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.768717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.768746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.779989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.780016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.793469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.793499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.804144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.804171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.815346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.815376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.826465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.826495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.837797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.837827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.849333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.849363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.860802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.860831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.742 [2024-07-15 17:35:49.872347] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.742 [2024-07-15 17:35:49.872385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.883594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.883624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.894773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.894802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.906293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.906323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.917734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.917764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.928965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.929005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.940624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.940654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.952343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.952373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.964071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.964098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.975926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.975953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:49.987306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:49.987335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.000874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.000928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.010705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.010744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.021027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.021056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.033795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.033829] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.043956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.043986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.055698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.055728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.071534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.071564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.082122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.082149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.093483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.093522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.105234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.105263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.116659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.116689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.000 [2024-07-15 17:35:50.127805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.000 [2024-07-15 17:35:50.127835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.139116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.139144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.150859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.150897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.163022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.163050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.174564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.174594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.186015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.186042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.199453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.199483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.210309] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.210339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.222482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.222511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.233748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.233777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.245144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.245170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.256299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.256329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.267756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.267796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.279310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.279340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.290741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.290771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.303714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.303745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.313928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.313955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.325925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.325953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.337346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.337378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.289 [2024-07-15 17:35:50.348532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.289 [2024-07-15 17:35:50.348563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.290 [2024-07-15 17:35:50.360037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.290 [2024-07-15 17:35:50.360065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.290 [2024-07-15 17:35:50.371020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.290 [2024-07-15 17:35:50.371048] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.290 [2024-07-15 17:35:50.382188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.290 [2024-07-15 17:35:50.382232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.290 [2024-07-15 17:35:50.395486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.290 [2024-07-15 17:35:50.395517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.405700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.405731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.417779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.417808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.429386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.429416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.440432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.440462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.451483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.451513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.463067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.463094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.474933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.474959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.485456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.485485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.496424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.496453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.507688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.507719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.518858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.518895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.532085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.532112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.542552] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.542581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.550062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.550088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550
00:12:55.550 Latency(us)
00:12:55.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:55.550 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:55.550 Nvme1n1 : 5.01 11163.67 87.22 0.00 0.00 11449.85 5218.61 22913.33
00:12:55.550 ===================================================================================================================
00:12:55.550 Total : 11163.67 87.22 0.00 0.00 11449.85 5218.61 22913.33
00:12:55.550 [2024-07-15 17:35:50.556910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.556951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.564923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.564962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.572950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.572972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.581002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.581048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.589018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.589066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.597038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.597083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.605057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.605100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.613074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.613118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.621123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.621171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.629129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.629174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:55.550 [2024-07-15 17:35:50.637156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:55.550 [2024-07-15 17:35:50.637202]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.550 [2024-07-15 17:35:50.645179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.550 [2024-07-15 17:35:50.645226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.551 [2024-07-15 17:35:50.653202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.551 [2024-07-15 17:35:50.653249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.551 [2024-07-15 17:35:50.661235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.551 [2024-07-15 17:35:50.661288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.551 [2024-07-15 17:35:50.669253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.551 [2024-07-15 17:35:50.669300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.551 [2024-07-15 17:35:50.677264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.551 [2024-07-15 17:35:50.677309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.551 [2024-07-15 17:35:50.685288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.551 [2024-07-15 17:35:50.685332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.693319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.693363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.701302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.701327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.709324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.709349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.717345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.717369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.725367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.725391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.733418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.733456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.741446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.741487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.749479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.749523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.757455] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.757479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.765478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.765504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.773497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.773521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.781517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.781541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.789562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.789598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.797598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.797639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.805617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.805686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.813603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.813628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.821625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.821649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 [2024-07-15 17:35:50.829647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.811 [2024-07-15 17:35:50.829671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2209514) - No such process 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2209514 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:55.811 delay0 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.811 17:35:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:55.811 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.069 [2024-07-15 17:35:50.987058] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:02.641 Initializing NVMe Controllers 00:13:02.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:02.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:02.641 Initialization complete. Launching workers. 00:13:02.641 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 116 00:13:02.641 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 402, failed to submit 34 00:13:02.641 success 211, unsuccess 191, failed 0 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.641 rmmod nvme_tcp 00:13:02.641 rmmod nvme_fabrics 00:13:02.641 rmmod nvme_keyring 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2208181 ']' 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2208181 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2208181 ']' 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2208181 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2208181 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2208181' 00:13:02.641 killing process with pid 2208181 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2208181 
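The sequence above wires a delay bdev into the zero-copy test: the long run of "Requested NSID 1 already in use" errors earlier comes from a loop that keeps retrying the add while the namespace is still attached (the run proceeds past it), after which NSID 1 is removed, delay0 is layered on malloc0 via bdev_delay_create with large artificial latencies, re-attached to cnode1 as NSID 1, and the abort example then queues random read/write I/O and reports how many of the submitted aborts succeed before the target process is shut down. A minimal reproduction might look like the sketch below, using the same RPCs and example binary that appear in the log; the malloc0 creation line and its 64/512 geometry are assumptions, since that step happened earlier in the run.

  # sketch: rebuild the abort-under-delay workload (malloc0 geometry assumed)
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'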
00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2208181 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.641 17:35:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.538 17:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.538 00:13:04.538 real 0m27.897s 00:13:04.538 user 0m41.225s 00:13:04.538 sys 0m8.365s 00:13:04.538 17:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.538 17:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:04.538 ************************************ 00:13:04.539 END TEST nvmf_zcopy 00:13:04.539 ************************************ 00:13:04.539 17:35:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:04.539 17:35:59 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:04.539 17:35:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:04.539 17:35:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.539 17:35:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.539 ************************************ 00:13:04.539 START TEST nvmf_nmic 00:13:04.539 ************************************ 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:04.539 * Looking for test storage... 
00:13:04.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.539 17:35:59 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.539 17:35:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.437 
17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:06.437 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.437 17:36:01 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:06.437 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:06.437 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:06.437 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
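What the block above shows is the harness discovering usable NICs for the phy run: the Intel E810 device ID pair 0x8086:0x159b is matched against the PCI bus, each matching function is resolved to its kernel net device through /sys/bus/pci/devices/$pci/net, and the two ports come back as cvl_0_0 and cvl_0_1. The lines that follow split those ports between the default namespace (initiator side, 10.0.0.1) and a fresh cvl_0_0_ns_spdk namespace (target side, 10.0.0.2) so that host and target traffic really traverse the link. A condensed sketch of that flow, reusing the commands the log shows; the discovery loop is an illustrative simplification, not the harness's exact code.

  # sketch: map E810 functions to net devices, then split them across namespaces
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      ls "$pci/net"                      # -> cvl_0_0 / cvl_0_1
  done
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up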
00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.437 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:13:06.695 00:13:06.695 --- 10.0.0.2 ping statistics --- 00:13:06.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.695 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:13:06.695 00:13:06.695 --- 10.0.0.1 ping statistics --- 00:13:06.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.695 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2212901 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2212901 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2212901 ']' 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.695 17:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:06.695 [2024-07-15 17:36:01.738810] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:13:06.695 [2024-07-15 17:36:01.738914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.695 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.695 [2024-07-15 17:36:01.807200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.953 [2024-07-15 17:36:01.917592] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.953 [2024-07-15 17:36:01.917648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:06.953 [2024-07-15 17:36:01.917672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.953 [2024-07-15 17:36:01.917686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.953 [2024-07-15 17:36:01.917695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.953 [2024-07-15 17:36:01.917776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.953 [2024-07-15 17:36:01.917840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.953 [2024-07-15 17:36:01.917907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.953 [2024-07-15 17:36:01.917910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.953 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:06.954 [2024-07-15 17:36:02.060548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.954 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.954 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:06.954 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.954 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 Malloc0 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 [2024-07-15 17:36:02.112474] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:07.214 test case1: single bdev can't be used in multiple subsystems 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 [2024-07-15 17:36:02.136339] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:07.214 [2024-07-15 17:36:02.136367] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:07.214 [2024-07-15 17:36:02.136390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.214 request: 00:13:07.214 { 00:13:07.214 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:07.214 "namespace": { 00:13:07.214 "bdev_name": "Malloc0", 00:13:07.214 "no_auto_visible": false 00:13:07.214 }, 00:13:07.214 "method": "nvmf_subsystem_add_ns", 00:13:07.214 "req_id": 1 00:13:07.214 } 00:13:07.214 Got JSON-RPC error response 00:13:07.214 response: 00:13:07.214 { 00:13:07.214 "code": -32602, 00:13:07.214 "message": "Invalid parameters" 00:13:07.214 } 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:07.214 Adding namespace failed - expected result. 
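Test case 1 above checks that a single bdev cannot be claimed by two subsystems: Malloc0 is already the namespace of cnode1, so adding it to the newly created cnode2 must fail, and the harness treats the -32602 JSON-RPC error ("bdev Malloc0 cannot be opened, error=-1") as the passing outcome. The expected-failure pattern is roughly the sketch below; the RPC calls are the ones shown in the log, while the if/else glue is illustrative rather than the script's literal code.

  # sketch: expected-failure check for double-claiming a bdev
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  if rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "Adding namespace unexpectedly succeeded" >&2; exit 1
  fi
  echo ' Adding namespace failed - expected result.'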
00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:07.214 test case2: host connect to nvmf target in multiple paths 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 [2024-07-15 17:36:02.144450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.214 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.784 17:36:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:08.722 17:36:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.722 17:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.722 17:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.722 17:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:08.722 17:36:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:10.628 17:36:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:10.628 17:36:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:10.628 17:36:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.628 17:36:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:10.628 17:36:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.628 17:36:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:10.628 17:36:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:10.628 [global] 00:13:10.628 thread=1 00:13:10.628 invalidate=1 00:13:10.628 rw=write 00:13:10.628 time_based=1 00:13:10.628 runtime=1 00:13:10.628 ioengine=libaio 00:13:10.628 direct=1 00:13:10.628 bs=4096 00:13:10.628 iodepth=1 00:13:10.628 norandommap=0 00:13:10.628 numjobs=1 00:13:10.628 00:13:10.628 verify_dump=1 00:13:10.628 verify_backlog=512 00:13:10.628 verify_state_save=0 00:13:10.628 do_verify=1 00:13:10.628 verify=crc32c-intel 00:13:10.628 [job0] 00:13:10.628 filename=/dev/nvme0n1 00:13:10.628 Could not set queue depth (nvme0n1) 00:13:10.628 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.628 fio-3.35 00:13:10.628 Starting 1 thread 00:13:12.008 00:13:12.008 job0: (groupid=0, jobs=1): err= 0: pid=2213520: Mon Jul 15 17:36:06 2024 00:13:12.008 read: IOPS=151, BW=608KiB/s (622kB/s)(620KiB/1020msec) 00:13:12.008 slat (nsec): min=5208, max=54837, avg=16754.63, stdev=9772.66 00:13:12.008 
clat (usec): min=322, max=42384, avg=5673.96, stdev=13778.65 00:13:12.008 lat (usec): min=329, max=42417, avg=5690.72, stdev=13781.15 00:13:12.008 clat percentiles (usec): 00:13:12.008 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:13:12.008 | 30.00th=[ 367], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 404], 00:13:12.008 | 70.00th=[ 420], 80.00th=[ 449], 90.00th=[41157], 95.00th=[41157], 00:13:12.008 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:12.008 | 99.99th=[42206] 00:13:12.008 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:12.008 slat (nsec): min=12682, max=45224, avg=20382.65, stdev=7046.40 00:13:12.008 clat (usec): min=212, max=424, avg=240.83, stdev=27.95 00:13:12.008 lat (usec): min=230, max=456, avg=261.21, stdev=32.32 00:13:12.008 clat percentiles (usec): 00:13:12.008 | 1.00th=[ 217], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 225], 00:13:12.008 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 235], 00:13:12.008 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[ 310], 00:13:12.008 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 424], 99.95th=[ 424], 00:13:12.008 | 99.99th=[ 424] 00:13:12.008 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:12.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:12.008 lat (usec) : 250=62.37%, 500=34.33%, 750=0.30% 00:13:12.008 lat (msec) : 50=3.00% 00:13:12.008 cpu : usr=0.98%, sys=0.88%, ctx=667, majf=0, minf=2 00:13:12.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:12.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.008 issued rwts: total=155,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:12.008 00:13:12.008 Run status group 0 (all jobs): 00:13:12.008 READ: bw=608KiB/s (622kB/s), 608KiB/s-608KiB/s (622kB/s-622kB/s), io=620KiB (635kB), run=1020-1020msec 00:13:12.008 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:13:12.008 00:13:12.008 Disk stats (read/write): 00:13:12.008 nvme0n1: ios=202/512, merge=0/0, ticks=785/122, in_queue=907, util=92.08% 00:13:12.008 17:36:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.008 
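Test case 2, whose output ends above, connects the initiator to the same subsystem through two listeners (10.0.0.2:4420 and :4421), waits for the namespace to surface by counting block devices whose serial is SPDKISFASTANDAWESOME, runs a one-second 4 KiB write job with crc32c verification against /dev/nvme0n1 through the fio wrapper, and finally a single disconnect by NQN tears down both paths, hence "disconnected 2 controller(s)". A hedged sketch of the host-side steps, reusing the NVME_HOSTNQN/NVME_HOSTID values from the log and condensing the wait loop:

  # sketch: two-path connect, wait for the serial to appear, run fio, disconnect by NQN
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  until [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) -ge 1 ]]; do sleep 2; done
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1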
17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.008 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.008 rmmod nvme_tcp 00:13:12.008 rmmod nvme_fabrics 00:13:12.008 rmmod nvme_keyring 00:13:12.266 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.266 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:12.266 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:12.266 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2212901 ']' 00:13:12.266 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2212901 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2212901 ']' 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2212901 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2212901 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2212901' 00:13:12.267 killing process with pid 2212901 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2212901 00:13:12.267 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2212901 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.526 17:36:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.428 17:36:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.428 00:13:14.428 real 0m9.965s 00:13:14.428 user 0m22.884s 00:13:14.428 sys 0m2.271s 00:13:14.428 17:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.428 17:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.428 ************************************ 00:13:14.428 END TEST nvmf_nmic 00:13:14.428 ************************************ 00:13:14.428 17:36:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:14.428 17:36:09 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:14.428 17:36:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:13:14.428 17:36:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.428 17:36:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.686 ************************************ 00:13:14.686 START TEST nvmf_fio_target 00:13:14.686 ************************************ 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:14.686 * Looking for test storage... 00:13:14.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.686 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.687 17:36:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.586 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.587 17:36:11 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:16.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:16.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.587 17:36:11 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:16.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:16.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:13:16.587 00:13:16.587 --- 10.0.0.2 ping statistics --- 00:13:16.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.587 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:13:16.587 00:13:16.587 --- 10.0.0.1 ping statistics --- 00:13:16.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.587 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:16.587 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2216097 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2216097 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2216097 ']' 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.880 17:36:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.880 [2024-07-15 17:36:11.786815] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:13:16.880 [2024-07-15 17:36:11.786911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.880 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.881 [2024-07-15 17:36:11.860864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.881 [2024-07-15 17:36:11.982263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.881 [2024-07-15 17:36:11.982323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.881 [2024-07-15 17:36:11.982339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.881 [2024-07-15 17:36:11.982352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.881 [2024-07-15 17:36:11.982364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.881 [2024-07-15 17:36:11.982442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.881 [2024-07-15 17:36:11.982503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.881 [2024-07-15 17:36:11.982556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.881 [2024-07-15 17:36:11.982559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.139 17:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.139 17:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:17.139 17:36:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.139 17:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:17.139 17:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.139 17:36:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.139 17:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:17.397 [2024-07-15 17:36:12.373602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.397 17:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.656 17:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:17.656 17:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.913 17:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:17.913 17:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.171 17:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:13:18.171 17:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.428 17:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:18.428 17:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:18.684 17:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.941 17:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:18.941 17:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.198 17:36:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:19.198 17:36:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.455 17:36:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:19.455 17:36:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:19.712 17:36:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.971 17:36:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:19.971 17:36:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.228 17:36:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:20.228 17:36:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.486 17:36:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.742 [2024-07-15 17:36:15.705943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.742 17:36:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:20.998 17:36:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:21.254 17:36:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.818 17:36:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:21.818 17:36:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.818 17:36:16 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.818 17:36:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:21.818 17:36:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:21.818 17:36:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:24.341 17:36:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.341 17:36:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.341 17:36:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.341 17:36:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:24.341 17:36:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.341 17:36:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:24.341 17:36:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:24.341 [global] 00:13:24.341 thread=1 00:13:24.341 invalidate=1 00:13:24.341 rw=write 00:13:24.341 time_based=1 00:13:24.341 runtime=1 00:13:24.341 ioengine=libaio 00:13:24.341 direct=1 00:13:24.341 bs=4096 00:13:24.341 iodepth=1 00:13:24.341 norandommap=0 00:13:24.341 numjobs=1 00:13:24.341 00:13:24.341 verify_dump=1 00:13:24.341 verify_backlog=512 00:13:24.341 verify_state_save=0 00:13:24.341 do_verify=1 00:13:24.341 verify=crc32c-intel 00:13:24.341 [job0] 00:13:24.341 filename=/dev/nvme0n1 00:13:24.341 [job1] 00:13:24.341 filename=/dev/nvme0n2 00:13:24.341 [job2] 00:13:24.341 filename=/dev/nvme0n3 00:13:24.341 [job3] 00:13:24.341 filename=/dev/nvme0n4 00:13:24.341 Could not set queue depth (nvme0n1) 00:13:24.341 Could not set queue depth (nvme0n2) 00:13:24.341 Could not set queue depth (nvme0n3) 00:13:24.341 Could not set queue depth (nvme0n4) 00:13:24.341 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.341 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.341 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.341 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.341 fio-3.35 00:13:24.341 Starting 4 threads 00:13:25.271 00:13:25.271 job0: (groupid=0, jobs=1): err= 0: pid=2217165: Mon Jul 15 17:36:20 2024 00:13:25.271 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:13:25.271 slat (nsec): min=8577, max=33335, avg=19196.19, stdev=8076.50 00:13:25.271 clat (usec): min=40893, max=42142, avg=41460.60, stdev=533.99 00:13:25.271 lat (usec): min=40908, max=42151, avg=41479.79, stdev=537.80 00:13:25.271 clat percentiles (usec): 00:13:25.271 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:25.271 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:25.271 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:25.271 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.271 | 99.99th=[42206] 00:13:25.271 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:13:25.271 slat (nsec): min=7690, max=62003, avg=18432.63, stdev=8812.12 
00:13:25.271 clat (usec): min=201, max=4443, avg=257.09, stdev=189.14 00:13:25.271 lat (usec): min=211, max=4463, avg=275.52, stdev=189.92 00:13:25.271 clat percentiles (usec): 00:13:25.271 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:13:25.271 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 245], 00:13:25.271 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 334], 00:13:25.271 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 4424], 99.95th=[ 4424], 00:13:25.271 | 99.99th=[ 4424] 00:13:25.271 bw ( KiB/s): min= 4087, max= 4087, per=51.69%, avg=4087.00, stdev= 0.00, samples=1 00:13:25.271 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:13:25.271 lat (usec) : 250=61.91%, 500=33.96% 00:13:25.271 lat (msec) : 10=0.19%, 50=3.94% 00:13:25.271 cpu : usr=0.59%, sys=0.79%, ctx=534, majf=0, minf=1 00:13:25.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.271 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.271 job1: (groupid=0, jobs=1): err= 0: pid=2217166: Mon Jul 15 17:36:20 2024 00:13:25.271 read: IOPS=387, BW=1549KiB/s (1586kB/s)(1552KiB/1002msec) 00:13:25.271 slat (nsec): min=6839, max=35242, avg=8149.27, stdev=3204.65 00:13:25.271 clat (usec): min=351, max=42445, avg=2085.97, stdev=8255.94 00:13:25.271 lat (usec): min=358, max=42454, avg=2094.12, stdev=8258.36 00:13:25.271 clat percentiles (usec): 00:13:25.271 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 367], 00:13:25.271 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 375], 60.00th=[ 379], 00:13:25.271 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 392], 95.00th=[ 429], 00:13:25.271 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.271 | 99.99th=[42206] 00:13:25.271 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:13:25.271 slat (nsec): min=8915, max=73677, avg=24429.05, stdev=12359.81 00:13:25.271 clat (usec): min=208, max=3285, avg=336.17, stdev=237.75 00:13:25.271 lat (usec): min=217, max=3311, avg=360.60, stdev=239.33 00:13:25.271 clat percentiles (usec): 00:13:25.271 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 260], 00:13:25.271 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 318], 00:13:25.271 | 70.00th=[ 338], 80.00th=[ 371], 90.00th=[ 424], 95.00th=[ 474], 00:13:25.271 | 99.00th=[ 537], 99.50th=[ 2933], 99.90th=[ 3294], 99.95th=[ 3294], 00:13:25.271 | 99.99th=[ 3294] 00:13:25.271 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.271 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.271 lat (usec) : 250=8.44%, 500=88.11%, 750=1.22% 00:13:25.271 lat (msec) : 4=0.44%, 50=1.78% 00:13:25.271 cpu : usr=1.20%, sys=1.90%, ctx=900, majf=0, minf=2 00:13:25.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.271 issued rwts: total=388,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.271 job2: (groupid=0, jobs=1): err= 0: pid=2217167: Mon Jul 15 17:36:20 2024 
00:13:25.271 read: IOPS=19, BW=77.2KiB/s (79.1kB/s)(80.0KiB/1036msec) 00:13:25.271 slat (nsec): min=8859, max=37258, avg=20414.20, stdev=8633.28 00:13:25.271 clat (usec): min=40607, max=42014, avg=41353.88, stdev=529.07 00:13:25.271 lat (usec): min=40616, max=42031, avg=41374.29, stdev=531.21 00:13:25.271 clat percentiles (usec): 00:13:25.271 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:25.271 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:25.271 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:25.271 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.271 | 99.99th=[42206] 00:13:25.271 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:13:25.271 slat (usec): min=9, max=38421, avg=152.33, stdev=2073.85 00:13:25.271 clat (usec): min=193, max=428, avg=248.67, stdev=35.24 00:13:25.271 lat (usec): min=202, max=38717, avg=401.00, stdev=2077.62 00:13:25.271 clat percentiles (usec): 00:13:25.271 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:13:25.271 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:13:25.271 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 314], 00:13:25.271 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 429], 99.95th=[ 429], 00:13:25.271 | 99.99th=[ 429] 00:13:25.271 bw ( KiB/s): min= 4087, max= 4087, per=51.69%, avg=4087.00, stdev= 0.00, samples=1 00:13:25.271 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:13:25.271 lat (usec) : 250=54.32%, 500=41.92% 00:13:25.271 lat (msec) : 50=3.76% 00:13:25.271 cpu : usr=1.16%, sys=0.68%, ctx=538, majf=0, minf=1 00:13:25.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.272 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.272 job3: (groupid=0, jobs=1): err= 0: pid=2217168: Mon Jul 15 17:36:20 2024 00:13:25.272 read: IOPS=52, BW=212KiB/s (217kB/s)(216KiB/1020msec) 00:13:25.272 slat (nsec): min=7312, max=77990, avg=19561.65, stdev=10641.04 00:13:25.272 clat (usec): min=365, max=42101, avg=15648.14, stdev=20034.46 00:13:25.272 lat (usec): min=378, max=42120, avg=15667.70, stdev=20034.67 00:13:25.272 clat percentiles (usec): 00:13:25.272 | 1.00th=[ 367], 5.00th=[ 392], 10.00th=[ 396], 20.00th=[ 416], 00:13:25.272 | 30.00th=[ 420], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 537], 00:13:25.272 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:13:25.272 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.272 | 99.99th=[42206] 00:13:25.272 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:25.272 slat (nsec): min=8553, max=82295, avg=26490.35, stdev=12944.13 00:13:25.272 clat (usec): min=209, max=1278, avg=305.37, stdev=75.31 00:13:25.272 lat (usec): min=234, max=1318, avg=331.86, stdev=76.20 00:13:25.272 clat percentiles (usec): 00:13:25.272 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 249], 00:13:25.272 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 310], 00:13:25.272 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 420], 00:13:25.272 | 99.00th=[ 502], 99.50th=[ 570], 99.90th=[ 1287], 99.95th=[ 1287], 00:13:25.272 | 99.99th=[ 1287] 
00:13:25.272 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.272 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.272 lat (usec) : 250=18.55%, 500=76.50%, 750=1.24% 00:13:25.272 lat (msec) : 2=0.18%, 50=3.53% 00:13:25.272 cpu : usr=0.69%, sys=1.37%, ctx=566, majf=0, minf=1 00:13:25.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.272 issued rwts: total=54,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.272 00:13:25.272 Run status group 0 (all jobs): 00:13:25.272 READ: bw=1865KiB/s (1910kB/s), 77.2KiB/s-1549KiB/s (79.1kB/s-1586kB/s), io=1932KiB (1978kB), run=1002-1036msec 00:13:25.272 WRITE: bw=7907KiB/s (8097kB/s), 1977KiB/s-2044KiB/s (2024kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1036msec 00:13:25.272 00:13:25.272 Disk stats (read/write): 00:13:25.272 nvme0n1: ios=67/512, merge=0/0, ticks=736/125, in_queue=861, util=87.07% 00:13:25.272 nvme0n2: ios=407/512, merge=0/0, ticks=1001/137, in_queue=1138, util=91.45% 00:13:25.272 nvme0n3: ios=45/512, merge=0/0, ticks=905/122, in_queue=1027, util=98.85% 00:13:25.272 nvme0n4: ios=54/512, merge=0/0, ticks=1017/129, in_queue=1146, util=91.24% 00:13:25.272 17:36:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:25.272 [global] 00:13:25.272 thread=1 00:13:25.272 invalidate=1 00:13:25.272 rw=randwrite 00:13:25.272 time_based=1 00:13:25.272 runtime=1 00:13:25.272 ioengine=libaio 00:13:25.272 direct=1 00:13:25.272 bs=4096 00:13:25.272 iodepth=1 00:13:25.272 norandommap=0 00:13:25.272 numjobs=1 00:13:25.272 00:13:25.272 verify_dump=1 00:13:25.272 verify_backlog=512 00:13:25.272 verify_state_save=0 00:13:25.272 do_verify=1 00:13:25.272 verify=crc32c-intel 00:13:25.272 [job0] 00:13:25.272 filename=/dev/nvme0n1 00:13:25.272 [job1] 00:13:25.272 filename=/dev/nvme0n2 00:13:25.272 [job2] 00:13:25.272 filename=/dev/nvme0n3 00:13:25.272 [job3] 00:13:25.272 filename=/dev/nvme0n4 00:13:25.528 Could not set queue depth (nvme0n1) 00:13:25.528 Could not set queue depth (nvme0n2) 00:13:25.528 Could not set queue depth (nvme0n3) 00:13:25.528 Could not set queue depth (nvme0n4) 00:13:25.528 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.528 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.528 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.528 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:25.528 fio-3.35 00:13:25.528 Starting 4 threads 00:13:26.898 00:13:26.898 job0: (groupid=0, jobs=1): err= 0: pid=2217400: Mon Jul 15 17:36:21 2024 00:13:26.898 read: IOPS=105, BW=421KiB/s (431kB/s)(424KiB/1007msec) 00:13:26.898 slat (nsec): min=7310, max=36065, avg=13253.83, stdev=3741.17 00:13:26.898 clat (usec): min=404, max=41166, avg=7748.37, stdev=15601.92 00:13:26.898 lat (usec): min=412, max=41202, avg=7761.63, stdev=15602.33 00:13:26.898 clat percentiles (usec): 00:13:26.898 | 1.00th=[ 404], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 486], 
00:13:26.898 | 30.00th=[ 490], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 502], 00:13:26.898 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[41157], 95.00th=[41157], 00:13:26.898 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:26.898 | 99.99th=[41157] 00:13:26.898 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:13:26.898 slat (nsec): min=8286, max=55102, avg=17222.08, stdev=7136.37 00:13:26.898 clat (usec): min=189, max=6106, avg=336.40, stdev=290.48 00:13:26.898 lat (usec): min=204, max=6125, avg=353.62, stdev=291.63 00:13:26.898 clat percentiles (usec): 00:13:26.898 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 229], 00:13:26.898 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:13:26.898 | 70.00th=[ 293], 80.00th=[ 469], 90.00th=[ 578], 95.00th=[ 619], 00:13:26.898 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 6128], 99.95th=[ 6128], 00:13:26.898 | 99.99th=[ 6128] 00:13:26.898 bw ( KiB/s): min= 4096, max= 4096, per=23.17%, avg=4096.00, stdev= 0.00, samples=1 00:13:26.898 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:26.898 lat (usec) : 250=29.94%, 500=47.25%, 750=19.58% 00:13:26.898 lat (msec) : 10=0.16%, 50=3.07% 00:13:26.898 cpu : usr=0.99%, sys=0.89%, ctx=618, majf=0, minf=1 00:13:26.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.898 issued rwts: total=106,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.898 job1: (groupid=0, jobs=1): err= 0: pid=2217401: Mon Jul 15 17:36:21 2024 00:13:26.898 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:26.898 slat (nsec): min=5284, max=50430, avg=10515.51, stdev=5612.24 00:13:26.898 clat (usec): min=276, max=1157, avg=334.40, stdev=52.76 00:13:26.898 lat (usec): min=282, max=1165, avg=344.92, stdev=55.38 00:13:26.898 clat percentiles (usec): 00:13:26.898 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 302], 00:13:26.898 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 334], 00:13:26.898 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 433], 00:13:26.898 | 99.00th=[ 529], 99.50th=[ 594], 99.90th=[ 660], 99.95th=[ 1156], 00:13:26.898 | 99.99th=[ 1156] 00:13:26.898 write: IOPS=1968, BW=7872KiB/s (8061kB/s)(7880KiB/1001msec); 0 zone resets 00:13:26.898 slat (nsec): min=6964, max=62692, avg=13227.91, stdev=7669.15 00:13:26.898 clat (usec): min=178, max=473, avg=219.35, stdev=34.63 00:13:26.898 lat (usec): min=186, max=511, avg=232.58, stdev=39.63 00:13:26.898 clat percentiles (usec): 00:13:26.898 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:13:26.898 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:13:26.898 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 281], 00:13:26.898 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 469], 99.95th=[ 474], 00:13:26.898 | 99.99th=[ 474] 00:13:26.898 bw ( KiB/s): min= 8192, max= 8192, per=46.34%, avg=8192.00, stdev= 0.00, samples=1 00:13:26.898 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:26.898 lat (usec) : 250=49.60%, 500=49.66%, 750=0.71% 00:13:26.898 lat (msec) : 2=0.03% 00:13:26.898 cpu : usr=2.80%, sys=5.90%, ctx=3507, majf=0, minf=1 00:13:26.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:13:26.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.898 issued rwts: total=1536,1970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.898 job2: (groupid=0, jobs=1): err= 0: pid=2217402: Mon Jul 15 17:36:21 2024 00:13:26.898 read: IOPS=20, BW=82.0KiB/s (83.9kB/s)(84.0KiB/1025msec) 00:13:26.898 slat (nsec): min=12007, max=33770, avg=19624.19, stdev=9146.06 00:13:26.898 clat (usec): min=40781, max=41881, avg=41017.97, stdev=214.88 00:13:26.898 lat (usec): min=40814, max=41915, avg=41037.59, stdev=217.49 00:13:26.898 clat percentiles (usec): 00:13:26.898 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:26.898 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:26.898 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:26.898 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:26.898 | 99.99th=[41681] 00:13:26.898 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:13:26.898 slat (nsec): min=7217, max=62725, avg=18211.93, stdev=8321.63 00:13:26.898 clat (usec): min=206, max=1577, avg=295.02, stdev=93.05 00:13:26.898 lat (usec): min=215, max=1595, avg=313.23, stdev=93.30 00:13:26.898 clat percentiles (usec): 00:13:26.898 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 239], 00:13:26.898 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 285], 60.00th=[ 302], 00:13:26.898 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 396], 00:13:26.898 | 99.00th=[ 562], 99.50th=[ 947], 99.90th=[ 1582], 99.95th=[ 1582], 00:13:26.898 | 99.99th=[ 1582] 00:13:26.898 bw ( KiB/s): min= 4096, max= 4096, per=23.17%, avg=4096.00, stdev= 0.00, samples=1 00:13:26.898 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:26.898 lat (usec) : 250=32.08%, 500=62.85%, 750=0.56%, 1000=0.38% 00:13:26.898 lat (msec) : 2=0.19%, 50=3.94% 00:13:26.898 cpu : usr=0.49%, sys=0.88%, ctx=533, majf=0, minf=1 00:13:26.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.898 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.898 job3: (groupid=0, jobs=1): err= 0: pid=2217403: Mon Jul 15 17:36:21 2024 00:13:26.899 read: IOPS=1332, BW=5331KiB/s (5459kB/s)(5336KiB/1001msec) 00:13:26.899 slat (nsec): min=4630, max=64855, avg=16295.77, stdev=9538.74 00:13:26.899 clat (usec): min=333, max=34769, avg=447.04, stdev=941.07 00:13:26.899 lat (usec): min=341, max=34795, avg=463.33, stdev=941.43 00:13:26.899 clat percentiles (usec): 00:13:26.899 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 396], 00:13:26.899 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:13:26.899 | 70.00th=[ 437], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 486], 00:13:26.899 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 619], 99.95th=[34866], 00:13:26.899 | 99.99th=[34866] 00:13:26.899 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:26.899 slat (nsec): min=5658, max=55018, avg=13034.87, stdev=7286.58 00:13:26.899 clat (usec): min=186, max=536, avg=227.74, stdev=35.00 00:13:26.899 
lat (usec): min=192, max=552, avg=240.77, stdev=38.76 00:13:26.899 clat percentiles (usec): 00:13:26.899 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:13:26.899 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:13:26.899 | 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 302], 00:13:26.899 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 461], 99.95th=[ 537], 00:13:26.899 | 99.99th=[ 537] 00:13:26.899 bw ( KiB/s): min= 7792, max= 7792, per=44.08%, avg=7792.00, stdev= 0.00, samples=1 00:13:26.899 iops : min= 1948, max= 1948, avg=1948.00, stdev= 0.00, samples=1 00:13:26.899 lat (usec) : 250=44.29%, 500=54.49%, 750=1.18% 00:13:26.899 lat (msec) : 50=0.03% 00:13:26.899 cpu : usr=2.90%, sys=3.80%, ctx=2870, majf=0, minf=2 00:13:26.899 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.899 issued rwts: total=1334,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.899 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.899 00:13:26.899 Run status group 0 (all jobs): 00:13:26.899 READ: bw=11.4MiB/s (12.0MB/s), 82.0KiB/s-6138KiB/s (83.9kB/s-6285kB/s), io=11.7MiB (12.3MB), run=1001-1025msec 00:13:26.899 WRITE: bw=17.3MiB/s (18.1MB/s), 1998KiB/s-7872KiB/s (2046kB/s-8061kB/s), io=17.7MiB (18.6MB), run=1001-1025msec 00:13:26.899 00:13:26.899 Disk stats (read/write): 00:13:26.899 nvme0n1: ios=152/512, merge=0/0, ticks=689/168, in_queue=857, util=87.17% 00:13:26.899 nvme0n2: ios=1466/1536, merge=0/0, ticks=1461/310, in_queue=1771, util=99.39% 00:13:26.899 nvme0n3: ios=16/512, merge=0/0, ticks=656/146, in_queue=802, util=88.94% 00:13:26.899 nvme0n4: ios=1024/1509, merge=0/0, ticks=463/326, in_queue=789, util=89.68% 00:13:26.899 17:36:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:26.899 [global] 00:13:26.899 thread=1 00:13:26.899 invalidate=1 00:13:26.899 rw=write 00:13:26.899 time_based=1 00:13:26.899 runtime=1 00:13:26.899 ioengine=libaio 00:13:26.899 direct=1 00:13:26.899 bs=4096 00:13:26.899 iodepth=128 00:13:26.899 norandommap=0 00:13:26.899 numjobs=1 00:13:26.899 00:13:26.899 verify_dump=1 00:13:26.899 verify_backlog=512 00:13:26.899 verify_state_save=0 00:13:26.899 do_verify=1 00:13:26.899 verify=crc32c-intel 00:13:26.899 [job0] 00:13:26.899 filename=/dev/nvme0n1 00:13:26.899 [job1] 00:13:26.899 filename=/dev/nvme0n2 00:13:26.899 [job2] 00:13:26.899 filename=/dev/nvme0n3 00:13:26.899 [job3] 00:13:26.899 filename=/dev/nvme0n4 00:13:26.899 Could not set queue depth (nvme0n1) 00:13:26.899 Could not set queue depth (nvme0n2) 00:13:26.899 Could not set queue depth (nvme0n3) 00:13:26.899 Could not set queue depth (nvme0n4) 00:13:27.156 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.156 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.156 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.156 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.156 fio-3.35 00:13:27.156 Starting 4 threads 00:13:28.530 00:13:28.530 job0: (groupid=0, jobs=1): err= 0: pid=2217730: Mon Jul 15 
17:36:23 2024 00:13:28.530 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1002msec) 00:13:28.530 slat (usec): min=2, max=7671, avg=138.94, stdev=718.04 00:13:28.530 clat (usec): min=835, max=33546, avg=17440.62, stdev=7076.18 00:13:28.530 lat (usec): min=1363, max=33560, avg=17579.56, stdev=7096.11 00:13:28.530 clat percentiles (usec): 00:13:28.530 | 1.00th=[ 2376], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11731], 00:13:28.530 | 30.00th=[12518], 40.00th=[12780], 50.00th=[15401], 60.00th=[19006], 00:13:28.530 | 70.00th=[20841], 80.00th=[23462], 90.00th=[28967], 95.00th=[31327], 00:13:28.530 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:13:28.530 | 99.99th=[33424] 00:13:28.530 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:13:28.530 slat (usec): min=3, max=10682, avg=134.92, stdev=623.27 00:13:28.530 clat (usec): min=1543, max=102448, avg=18343.64, stdev=14000.43 00:13:28.530 lat (usec): min=1552, max=102454, avg=18478.56, stdev=14072.51 00:13:28.530 clat percentiles (msec): 00:13:28.531 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:13:28.531 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 16], 60.00th=[ 16], 00:13:28.531 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 26], 95.00th=[ 39], 00:13:28.531 | 99.00th=[ 93], 99.50th=[ 100], 99.90th=[ 103], 99.95th=[ 103], 00:13:28.531 | 99.99th=[ 103] 00:13:28.531 bw ( KiB/s): min=13656, max=15016, per=21.91%, avg=14336.00, stdev=961.67, samples=2 00:13:28.531 iops : min= 3414, max= 3754, avg=3584.00, stdev=240.42, samples=2 00:13:28.531 lat (usec) : 1000=0.01% 00:13:28.531 lat (msec) : 2=0.37%, 4=0.56%, 10=4.32%, 20=63.01%, 50=29.71% 00:13:28.531 lat (msec) : 100=1.80%, 250=0.21% 00:13:28.531 cpu : usr=3.30%, sys=4.50%, ctx=492, majf=0, minf=1 00:13:28.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:28.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.531 issued rwts: total=3521,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.531 job1: (groupid=0, jobs=1): err= 0: pid=2217746: Mon Jul 15 17:36:23 2024 00:13:28.531 read: IOPS=3636, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1003msec) 00:13:28.531 slat (usec): min=2, max=11578, avg=140.00, stdev=825.73 00:13:28.531 clat (usec): min=944, max=42997, avg=17140.78, stdev=7590.30 00:13:28.531 lat (usec): min=4498, max=43003, avg=17280.78, stdev=7633.91 00:13:28.531 clat percentiles (usec): 00:13:28.531 | 1.00th=[ 5932], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11076], 00:13:28.531 | 30.00th=[11338], 40.00th=[11863], 50.00th=[13173], 60.00th=[17171], 00:13:28.531 | 70.00th=[20579], 80.00th=[25035], 90.00th=[29754], 95.00th=[30802], 00:13:28.531 | 99.00th=[36439], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:13:28.531 | 99.99th=[43254] 00:13:28.531 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:13:28.531 slat (usec): min=4, max=8479, avg=111.35, stdev=502.41 00:13:28.531 clat (usec): min=2986, max=42997, avg=15744.64, stdev=6382.45 00:13:28.531 lat (usec): min=2992, max=43006, avg=15855.99, stdev=6413.31 00:13:28.531 clat percentiles (usec): 00:13:28.531 | 1.00th=[ 4293], 5.00th=[ 7373], 10.00th=[ 9896], 20.00th=[11338], 00:13:28.531 | 30.00th=[11731], 40.00th=[12649], 50.00th=[15008], 60.00th=[15533], 00:13:28.531 | 70.00th=[16909], 80.00th=[21103], 90.00th=[23725], 95.00th=[29230], 
00:13:28.531 | 99.00th=[36439], 99.50th=[36963], 99.90th=[40633], 99.95th=[40633], 00:13:28.531 | 99.99th=[43254] 00:13:28.531 bw ( KiB/s): min=11784, max=20464, per=24.65%, avg=16124.00, stdev=6137.69, samples=2 00:13:28.531 iops : min= 2946, max= 5116, avg=4031.00, stdev=1534.42, samples=2 00:13:28.531 lat (usec) : 1000=0.01% 00:13:28.531 lat (msec) : 4=0.36%, 10=8.77%, 20=64.06%, 50=26.80% 00:13:28.531 cpu : usr=5.39%, sys=5.69%, ctx=455, majf=0, minf=1 00:13:28.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:28.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.531 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.531 job2: (groupid=0, jobs=1): err= 0: pid=2217747: Mon Jul 15 17:36:23 2024 00:13:28.531 read: IOPS=4343, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1003msec) 00:13:28.531 slat (usec): min=2, max=7705, avg=109.77, stdev=612.39 00:13:28.531 clat (usec): min=1337, max=27756, avg=14223.42, stdev=3062.01 00:13:28.531 lat (usec): min=3745, max=27787, avg=14333.19, stdev=3089.28 00:13:28.531 clat percentiles (usec): 00:13:28.531 | 1.00th=[ 6980], 5.00th=[10028], 10.00th=[10945], 20.00th=[12256], 00:13:28.531 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13829], 60.00th=[14484], 00:13:28.531 | 70.00th=[15008], 80.00th=[15926], 90.00th=[19268], 95.00th=[20317], 00:13:28.531 | 99.00th=[23462], 99.50th=[23725], 99.90th=[23725], 99.95th=[25297], 00:13:28.531 | 99.99th=[27657] 00:13:28.531 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:13:28.531 slat (usec): min=3, max=10167, avg=103.40, stdev=528.99 00:13:28.531 clat (usec): min=3161, max=26392, avg=14040.77, stdev=3798.82 00:13:28.531 lat (usec): min=3174, max=26411, avg=14144.16, stdev=3812.23 00:13:28.531 clat percentiles (usec): 00:13:28.531 | 1.00th=[ 7111], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[11731], 00:13:28.531 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13304], 60.00th=[13698], 00:13:28.531 | 70.00th=[13960], 80.00th=[16188], 90.00th=[21103], 95.00th=[22938], 00:13:28.531 | 99.00th=[23987], 99.50th=[24511], 99.90th=[26346], 99.95th=[26346], 00:13:28.531 | 99.99th=[26346] 00:13:28.531 bw ( KiB/s): min=16888, max=19976, per=28.17%, avg=18432.00, stdev=2183.55, samples=2 00:13:28.531 iops : min= 4222, max= 4994, avg=4608.00, stdev=545.89, samples=2 00:13:28.531 lat (msec) : 2=0.01%, 4=0.23%, 10=5.81%, 20=85.41%, 50=8.53% 00:13:28.531 cpu : usr=4.19%, sys=7.49%, ctx=497, majf=0, minf=1 00:13:28.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:28.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.531 issued rwts: total=4357,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.531 job3: (groupid=0, jobs=1): err= 0: pid=2217748: Mon Jul 15 17:36:23 2024 00:13:28.531 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:13:28.531 slat (usec): min=2, max=24206, avg=115.12, stdev=778.34 00:13:28.531 clat (usec): min=4347, max=59109, avg=15223.28, stdev=6969.92 00:13:28.531 lat (usec): min=4353, max=59123, avg=15338.40, stdev=7014.14 00:13:28.531 clat percentiles (usec): 00:13:28.531 | 1.00th=[ 6783], 5.00th=[10159], 10.00th=[11076], 
20.00th=[12125], 00:13:28.531 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[13698], 00:13:28.531 | 70.00th=[14615], 80.00th=[16319], 90.00th=[20841], 95.00th=[25560], 00:13:28.531 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:13:28.531 | 99.99th=[58983] 00:13:28.531 write: IOPS=4104, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1003msec); 0 zone resets 00:13:28.531 slat (usec): min=3, max=20874, avg=119.22, stdev=910.13 00:13:28.531 clat (usec): min=1414, max=72305, avg=15749.05, stdev=9250.86 00:13:28.531 lat (usec): min=3951, max=72337, avg=15868.26, stdev=9330.19 00:13:28.531 clat percentiles (usec): 00:13:28.531 | 1.00th=[ 5997], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[11863], 00:13:28.531 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:13:28.531 | 70.00th=[14222], 80.00th=[17433], 90.00th=[21365], 95.00th=[43254], 00:13:28.531 | 99.00th=[56361], 99.50th=[56886], 99.90th=[56886], 99.95th=[61604], 00:13:28.531 | 99.99th=[71828] 00:13:28.531 bw ( KiB/s): min=16384, max=16384, per=25.04%, avg=16384.00, stdev= 0.00, samples=2 00:13:28.531 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:28.531 lat (msec) : 2=0.01%, 4=0.05%, 10=6.00%, 20=82.13%, 50=10.09% 00:13:28.531 lat (msec) : 100=1.72% 00:13:28.531 cpu : usr=4.89%, sys=5.59%, ctx=368, majf=0, minf=1 00:13:28.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:28.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.531 issued rwts: total=4096,4117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.531 00:13:28.531 Run status group 0 (all jobs): 00:13:28.531 READ: bw=60.8MiB/s (63.8MB/s), 13.7MiB/s-17.0MiB/s (14.4MB/s-17.8MB/s), io=61.0MiB (64.0MB), run=1002-1003msec 00:13:28.531 WRITE: bw=63.9MiB/s (67.0MB/s), 14.0MiB/s-17.9MiB/s (14.7MB/s-18.8MB/s), io=64.1MiB (67.2MB), run=1002-1003msec 00:13:28.531 00:13:28.531 Disk stats (read/write): 00:13:28.531 nvme0n1: ios=3122/3089, merge=0/0, ticks=13633/28699, in_queue=42332, util=99.70% 00:13:28.531 nvme0n2: ios=3247/3584, merge=0/0, ticks=37087/39572, in_queue=76659, util=98.68% 00:13:28.531 nvme0n3: ios=3631/3917, merge=0/0, ticks=26698/25962, in_queue=52660, util=96.97% 00:13:28.531 nvme0n4: ios=3107/3584, merge=0/0, ticks=18839/24444, in_queue=43283, util=90.52% 00:13:28.531 17:36:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:28.531 [global] 00:13:28.531 thread=1 00:13:28.531 invalidate=1 00:13:28.531 rw=randwrite 00:13:28.531 time_based=1 00:13:28.531 runtime=1 00:13:28.531 ioengine=libaio 00:13:28.531 direct=1 00:13:28.531 bs=4096 00:13:28.531 iodepth=128 00:13:28.531 norandommap=0 00:13:28.531 numjobs=1 00:13:28.531 00:13:28.531 verify_dump=1 00:13:28.531 verify_backlog=512 00:13:28.531 verify_state_save=0 00:13:28.531 do_verify=1 00:13:28.531 verify=crc32c-intel 00:13:28.531 [job0] 00:13:28.531 filename=/dev/nvme0n1 00:13:28.531 [job1] 00:13:28.531 filename=/dev/nvme0n2 00:13:28.531 [job2] 00:13:28.531 filename=/dev/nvme0n3 00:13:28.531 [job3] 00:13:28.531 filename=/dev/nvme0n4 00:13:28.531 Could not set queue depth (nvme0n1) 00:13:28.531 Could not set queue depth (nvme0n2) 00:13:28.531 Could not set queue depth (nvme0n3) 00:13:28.531 Could not set queue depth (nvme0n4) 
00:13:28.531 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.531 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.531 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.531 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:28.531 fio-3.35 00:13:28.531 Starting 4 threads 00:13:29.903 00:13:29.903 job0: (groupid=0, jobs=1): err= 0: pid=2217984: Mon Jul 15 17:36:24 2024 00:13:29.903 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:13:29.903 slat (usec): min=3, max=13429, avg=178.12, stdev=1030.18 00:13:29.903 clat (usec): min=15613, max=43278, avg=23014.27, stdev=5515.46 00:13:29.903 lat (usec): min=15620, max=43316, avg=23192.40, stdev=5601.99 00:13:29.903 clat percentiles (usec): 00:13:29.903 | 1.00th=[16188], 5.00th=[17957], 10.00th=[18220], 20.00th=[19006], 00:13:29.903 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20579], 60.00th=[21365], 00:13:29.903 | 70.00th=[23200], 80.00th=[28181], 90.00th=[32900], 95.00th=[34341], 00:13:29.903 | 99.00th=[36439], 99.50th=[36963], 99.90th=[41681], 99.95th=[41681], 00:13:29.903 | 99.99th=[43254] 00:13:29.903 write: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1007msec); 0 zone resets 00:13:29.903 slat (usec): min=5, max=33950, avg=190.63, stdev=1119.10 00:13:29.903 clat (usec): min=3065, max=50936, avg=25118.94, stdev=9435.07 00:13:29.903 lat (usec): min=3076, max=51752, avg=25309.57, stdev=9499.11 00:13:29.903 clat percentiles (usec): 00:13:29.903 | 1.00th=[ 3097], 5.00th=[11076], 10.00th=[14484], 20.00th=[17433], 00:13:29.903 | 30.00th=[20055], 40.00th=[21890], 50.00th=[25035], 60.00th=[26870], 00:13:29.903 | 70.00th=[28705], 80.00th=[30540], 90.00th=[38536], 95.00th=[45351], 00:13:29.903 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:13:29.903 | 99.99th=[51119] 00:13:29.903 bw ( KiB/s): min= 8640, max=12120, per=15.94%, avg=10380.00, stdev=2460.73, samples=2 00:13:29.903 iops : min= 2160, max= 3030, avg=2595.00, stdev=615.18, samples=2 00:13:29.903 lat (msec) : 4=0.59%, 10=1.53%, 20=32.83%, 50=64.92%, 100=0.13% 00:13:29.903 cpu : usr=2.58%, sys=5.37%, ctx=261, majf=0, minf=1 00:13:29.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:29.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.903 issued rwts: total=2560,2722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.903 job1: (groupid=0, jobs=1): err= 0: pid=2217985: Mon Jul 15 17:36:24 2024 00:13:29.903 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:13:29.903 slat (usec): min=2, max=43993, avg=82.34, stdev=775.44 00:13:29.903 clat (usec): min=554, max=52837, avg=11907.10, stdev=6948.77 00:13:29.903 lat (usec): min=559, max=53175, avg=11989.43, stdev=6976.03 00:13:29.903 clat percentiles (usec): 00:13:29.903 | 1.00th=[ 3326], 5.00th=[ 5538], 10.00th=[ 8291], 20.00th=[ 9241], 00:13:29.903 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:13:29.903 | 70.00th=[11863], 80.00th=[12518], 90.00th=[15533], 95.00th=[17433], 00:13:29.903 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:13:29.903 | 99.99th=[52691] 00:13:29.903 
write: IOPS=6063, BW=23.7MiB/s (24.8MB/s)(23.7MiB/1002msec); 0 zone resets 00:13:29.903 slat (usec): min=3, max=9790, avg=68.69, stdev=459.82 00:13:29.903 clat (usec): min=412, max=19630, avg=9886.02, stdev=2679.88 00:13:29.903 lat (usec): min=1479, max=20290, avg=9954.72, stdev=2686.34 00:13:29.903 clat percentiles (usec): 00:13:29.903 | 1.00th=[ 2638], 5.00th=[ 5211], 10.00th=[ 6456], 20.00th=[ 7898], 00:13:29.903 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10421], 00:13:29.903 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12780], 95.00th=[14615], 00:13:29.903 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18220], 99.95th=[19006], 00:13:29.903 | 99.99th=[19530] 00:13:29.903 bw ( KiB/s): min=23016, max=24576, per=36.54%, avg=23796.00, stdev=1103.09, samples=2 00:13:29.903 iops : min= 5754, max= 6144, avg=5949.00, stdev=275.77, samples=2 00:13:29.903 lat (usec) : 500=0.01%, 750=0.17% 00:13:29.903 lat (msec) : 2=0.18%, 4=2.49%, 10=37.79%, 20=58.20%, 50=0.09% 00:13:29.903 lat (msec) : 100=1.07% 00:13:29.903 cpu : usr=3.90%, sys=7.39%, ctx=512, majf=0, minf=1 00:13:29.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:29.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.903 issued rwts: total=5632,6076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.903 job2: (groupid=0, jobs=1): err= 0: pid=2217986: Mon Jul 15 17:36:24 2024 00:13:29.903 read: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1006msec) 00:13:29.903 slat (usec): min=2, max=14994, avg=153.87, stdev=919.82 00:13:29.903 clat (usec): min=4863, max=68018, avg=19194.45, stdev=6943.93 00:13:29.903 lat (usec): min=5616, max=76900, avg=19348.32, stdev=7023.71 00:13:29.903 clat percentiles (usec): 00:13:29.903 | 1.00th=[ 5932], 5.00th=[10814], 10.00th=[13304], 20.00th=[14615], 00:13:29.903 | 30.00th=[15664], 40.00th=[16712], 50.00th=[17433], 60.00th=[19268], 00:13:29.903 | 70.00th=[21365], 80.00th=[23462], 90.00th=[27132], 95.00th=[29492], 00:13:29.903 | 99.00th=[33424], 99.50th=[65274], 99.90th=[67634], 99.95th=[67634], 00:13:29.903 | 99.99th=[67634] 00:13:29.903 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:13:29.903 slat (usec): min=4, max=8616, avg=177.18, stdev=695.26 00:13:29.903 clat (usec): min=633, max=45137, avg=24472.65, stdev=7410.80 00:13:29.903 lat (usec): min=946, max=45156, avg=24649.83, stdev=7459.84 00:13:29.903 clat percentiles (usec): 00:13:29.903 | 1.00th=[ 6194], 5.00th=[10814], 10.00th=[14353], 20.00th=[19006], 00:13:29.903 | 30.00th=[20317], 40.00th=[24511], 50.00th=[25560], 60.00th=[26870], 00:13:29.903 | 70.00th=[27919], 80.00th=[29492], 90.00th=[33817], 95.00th=[35914], 00:13:29.903 | 99.00th=[41681], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:13:29.903 | 99.99th=[45351] 00:13:29.903 bw ( KiB/s): min=11624, max=12952, per=18.87%, avg=12288.00, stdev=939.04, samples=2 00:13:29.903 iops : min= 2906, max= 3238, avg=3072.00, stdev=234.76, samples=2 00:13:29.903 lat (usec) : 750=0.02%, 1000=0.03% 00:13:29.903 lat (msec) : 2=0.12%, 10=4.04%, 20=40.30%, 50=55.14%, 100=0.35% 00:13:29.903 cpu : usr=3.68%, sys=5.97%, ctx=410, majf=0, minf=1 00:13:29.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:29.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.904 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.904 issued rwts: total=2717,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.904 job3: (groupid=0, jobs=1): err= 0: pid=2217987: Mon Jul 15 17:36:24 2024 00:13:29.904 read: IOPS=4088, BW=16.0MiB/s (16.7MB/s)(16.2MiB/1012msec) 00:13:29.904 slat (usec): min=2, max=26149, avg=122.76, stdev=942.72 00:13:29.904 clat (usec): min=2632, max=57699, avg=15984.00, stdev=8673.55 00:13:29.904 lat (usec): min=2637, max=57728, avg=16106.76, stdev=8735.57 00:13:29.904 clat percentiles (usec): 00:13:29.904 | 1.00th=[ 6521], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11731], 00:13:29.904 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13435], 00:13:29.904 | 70.00th=[14222], 80.00th=[18482], 90.00th=[27395], 95.00th=[33424], 00:13:29.904 | 99.00th=[52691], 99.50th=[55837], 99.90th=[57410], 99.95th=[57934], 00:13:29.904 | 99.99th=[57934] 00:13:29.904 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:13:29.904 slat (usec): min=3, max=10063, avg=94.45, stdev=464.00 00:13:29.904 clat (usec): min=965, max=57657, avg=13490.80, stdev=5491.01 00:13:29.904 lat (usec): min=990, max=57664, avg=13585.25, stdev=5519.83 00:13:29.904 clat percentiles (usec): 00:13:29.904 | 1.00th=[ 3294], 5.00th=[ 5080], 10.00th=[ 8586], 20.00th=[11207], 00:13:29.904 | 30.00th=[12125], 40.00th=[12649], 50.00th=[12911], 60.00th=[13435], 00:13:29.904 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16712], 95.00th=[21103], 00:13:29.904 | 99.00th=[38011], 99.50th=[41681], 99.90th=[49546], 99.95th=[49546], 00:13:29.904 | 99.99th=[57410] 00:13:29.904 bw ( KiB/s): min=15704, max=20480, per=27.78%, avg=18092.00, stdev=3377.14, samples=2 00:13:29.904 iops : min= 3926, max= 5120, avg=4523.00, stdev=844.29, samples=2 00:13:29.904 lat (usec) : 1000=0.02% 00:13:29.904 lat (msec) : 2=0.09%, 4=1.18%, 10=8.70%, 20=78.39%, 50=10.87% 00:13:29.904 lat (msec) : 100=0.74% 00:13:29.904 cpu : usr=3.86%, sys=7.81%, ctx=502, majf=0, minf=1 00:13:29.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:29.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.904 issued rwts: total=4138,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.904 00:13:29.904 Run status group 0 (all jobs): 00:13:29.904 READ: bw=58.1MiB/s (60.9MB/s), 9.93MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=58.8MiB (61.6MB), run=1002-1012msec 00:13:29.904 WRITE: bw=63.6MiB/s (66.7MB/s), 10.6MiB/s-23.7MiB/s (11.1MB/s-24.8MB/s), io=64.4MiB (67.5MB), run=1002-1012msec 00:13:29.904 00:13:29.904 Disk stats (read/write): 00:13:29.904 nvme0n1: ios=2070/2297, merge=0/0, ticks=16869/18783, in_queue=35652, util=96.69% 00:13:29.904 nvme0n2: ios=4793/5120, merge=0/0, ticks=37669/33814, in_queue=71483, util=97.56% 00:13:29.904 nvme0n3: ios=2546/2560, merge=0/0, ticks=26750/29261, in_queue=56011, util=97.91% 00:13:29.904 nvme0n4: ios=3616/4079, merge=0/0, ticks=34307/27956, in_queue=62263, util=97.27% 00:13:29.904 17:36:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:29.904 17:36:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2218123 00:13:29.904 17:36:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 
00:13:29.904 17:36:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:29.904 [global] 00:13:29.904 thread=1 00:13:29.904 invalidate=1 00:13:29.904 rw=read 00:13:29.904 time_based=1 00:13:29.904 runtime=10 00:13:29.904 ioengine=libaio 00:13:29.904 direct=1 00:13:29.904 bs=4096 00:13:29.904 iodepth=1 00:13:29.904 norandommap=1 00:13:29.904 numjobs=1 00:13:29.904 00:13:29.904 [job0] 00:13:29.904 filename=/dev/nvme0n1 00:13:29.904 [job1] 00:13:29.904 filename=/dev/nvme0n2 00:13:29.904 [job2] 00:13:29.904 filename=/dev/nvme0n3 00:13:29.904 [job3] 00:13:29.904 filename=/dev/nvme0n4 00:13:29.904 Could not set queue depth (nvme0n1) 00:13:29.904 Could not set queue depth (nvme0n2) 00:13:29.904 Could not set queue depth (nvme0n3) 00:13:29.904 Could not set queue depth (nvme0n4) 00:13:29.904 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.904 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.904 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.904 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.904 fio-3.35 00:13:29.904 Starting 4 threads 00:13:33.209 17:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:33.209 17:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:33.209 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=290816, buflen=4096 00:13:33.209 fio: pid=2218215, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:33.209 17:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.209 17:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:33.209 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=610304, buflen=4096 00:13:33.209 fio: pid=2218214, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:33.776 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9388032, buflen=4096 00:13:33.776 fio: pid=2218212, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:33.776 17:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.776 17:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:33.776 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=39612416, buflen=4096 00:13:33.776 fio: pid=2218213, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:33.776 17:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.776 17:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:34.035 00:13:34.035 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2218212: Mon Jul 15 17:36:28 2024 00:13:34.035 read: IOPS=658, 
BW=2632KiB/s (2695kB/s)(9168KiB/3483msec) 00:13:34.035 slat (usec): min=4, max=19482, avg=39.91, stdev=605.82 00:13:34.035 clat (usec): min=276, max=41961, avg=1466.01, stdev=6440.17 00:13:34.035 lat (usec): min=282, max=41994, avg=1505.94, stdev=6465.66 00:13:34.035 clat percentiles (usec): 00:13:34.035 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 330], 00:13:34.035 | 30.00th=[ 359], 40.00th=[ 392], 50.00th=[ 412], 60.00th=[ 433], 00:13:34.035 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 519], 95.00th=[ 693], 00:13:34.035 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:13:34.035 | 99.99th=[42206] 00:13:34.035 bw ( KiB/s): min= 96, max= 8384, per=14.62%, avg=1894.67, stdev=3221.81, samples=6 00:13:34.035 iops : min= 24, max= 2096, avg=473.67, stdev=805.45, samples=6 00:13:34.035 lat (usec) : 500=86.04%, 750=10.68%, 1000=0.52% 00:13:34.035 lat (msec) : 2=0.09%, 20=0.04%, 50=2.57% 00:13:34.035 cpu : usr=0.52%, sys=1.12%, ctx=2298, majf=0, minf=1 00:13:34.035 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.035 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.035 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.035 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.035 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2218213: Mon Jul 15 17:36:28 2024 00:13:34.035 read: IOPS=2570, BW=10.0MiB/s (10.5MB/s)(37.8MiB/3762msec) 00:13:34.035 slat (usec): min=4, max=19450, avg=21.94, stdev=365.50 00:13:34.035 clat (usec): min=270, max=12007, avg=361.42, stdev=135.15 00:13:34.035 lat (usec): min=274, max=19988, avg=383.36, stdev=393.10 00:13:34.035 clat percentiles (usec): 00:13:34.035 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 310], 00:13:34.035 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 355], 00:13:34.035 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 474], 95.00th=[ 490], 00:13:34.035 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 742], 99.95th=[ 1045], 00:13:34.035 | 99.99th=[11994] 00:13:34.035 bw ( KiB/s): min= 8896, max=11480, per=80.41%, avg=10416.14, stdev=1012.73, samples=7 00:13:34.035 iops : min= 2224, max= 2870, avg=2604.00, stdev=253.22, samples=7 00:13:34.035 lat (usec) : 500=96.23%, 750=3.67%, 1000=0.04% 00:13:34.035 lat (msec) : 2=0.04%, 20=0.01% 00:13:34.035 cpu : usr=1.81%, sys=4.15%, ctx=9680, majf=0, minf=1 00:13:34.035 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.035 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.035 issued rwts: total=9672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.035 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.035 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2218214: Mon Jul 15 17:36:28 2024 00:13:34.035 read: IOPS=46, BW=186KiB/s (190kB/s)(596KiB/3210msec) 00:13:34.035 slat (nsec): min=7653, max=47199, avg=19031.84, stdev=9702.71 00:13:34.035 clat (usec): min=317, max=42378, avg=21367.36, stdev=20379.47 00:13:34.035 lat (usec): min=328, max=42413, avg=21386.42, stdev=20382.79 00:13:34.035 clat percentiles (usec): 00:13:34.035 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:13:34.035 | 30.00th=[ 363], 40.00th=[ 
371], 50.00th=[41157], 60.00th=[41157], 00:13:34.035 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:34.035 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:34.035 | 99.99th=[42206] 00:13:34.036 bw ( KiB/s): min= 96, max= 632, per=1.48%, avg=192.00, stdev=215.91, samples=6 00:13:34.036 iops : min= 24, max= 158, avg=48.00, stdev=53.98, samples=6 00:13:34.036 lat (usec) : 500=46.00%, 750=2.00% 00:13:34.036 lat (msec) : 50=51.33% 00:13:34.036 cpu : usr=0.19%, sys=0.00%, ctx=151, majf=0, minf=1 00:13:34.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.036 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.036 issued rwts: total=150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.036 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2218215: Mon Jul 15 17:36:28 2024 00:13:34.036 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(284KiB/2934msec) 00:13:34.036 slat (nsec): min=12852, max=36860, avg=21976.78, stdev=9043.83 00:13:34.036 clat (usec): min=40873, max=41247, avg=40976.75, stdev=49.15 00:13:34.036 lat (usec): min=40898, max=41271, avg=40998.55, stdev=47.06 00:13:34.036 clat percentiles (usec): 00:13:34.036 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:34.036 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:34.036 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:34.036 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:34.036 | 99.99th=[41157] 00:13:34.036 bw ( KiB/s): min= 96, max= 104, per=0.75%, avg=97.60, stdev= 3.58, samples=5 00:13:34.036 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:13:34.036 lat (msec) : 50=98.61% 00:13:34.036 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=1 00:13:34.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.036 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.036 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.036 00:13:34.036 Run status group 0 (all jobs): 00:13:34.036 READ: bw=12.6MiB/s (13.3MB/s), 96.8KiB/s-10.0MiB/s (99.1kB/s-10.5MB/s), io=47.6MiB (49.9MB), run=2934-3762msec 00:13:34.036 00:13:34.036 Disk stats (read/write): 00:13:34.036 nvme0n1: ios=2072/0, merge=0/0, ticks=3231/0, in_queue=3231, util=94.45% 00:13:34.036 nvme0n2: ios=9349/0, merge=0/0, ticks=3332/0, in_queue=3332, util=94.88% 00:13:34.036 nvme0n3: ios=186/0, merge=0/0, ticks=3253/0, in_queue=3253, util=99.66% 00:13:34.036 nvme0n4: ios=117/0, merge=0/0, ticks=3886/0, in_queue=3886, util=99.63% 00:13:34.036 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.036 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:34.313 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.314 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:34.589 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.589 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:34.866 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.866 17:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:35.124 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:35.124 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2218123 00:13:35.124 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:35.124 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:35.381 nvmf hotplug test: fio failed as expected 00:13:35.381 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.641 rmmod nvme_tcp 00:13:35.641 rmmod nvme_fabrics 00:13:35.641 rmmod nvme_keyring 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2216097 ']' 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2216097 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2216097 ']' 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2216097 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216097 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216097' 00:13:35.641 killing process with pid 2216097 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2216097 00:13:35.641 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2216097 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.901 17:36:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.440 17:36:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.440 00:13:38.440 real 0m23.418s 00:13:38.440 user 1m20.472s 00:13:38.440 sys 0m6.818s 00:13:38.440 17:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.440 17:36:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.440 ************************************ 00:13:38.440 END TEST nvmf_fio_target 00:13:38.440 ************************************ 00:13:38.440 17:36:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:38.440 17:36:33 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.440 17:36:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.440 17:36:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.440 17:36:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.440 ************************************ 00:13:38.440 START TEST nvmf_bdevio 00:13:38.440 ************************************ 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.440 * Looking for test storage... 00:13:38.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.440 17:36:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.441 17:36:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:39.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:39.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:39.820 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:39.820 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:40.079 
Found net devices under 0000:0a:00.1: cvl_0_1 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.079 17:36:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:13:40.079 00:13:40.079 --- 10.0.0.2 ping statistics --- 00:13:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.079 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:13:40.079 00:13:40.079 --- 10.0.0.1 ping statistics --- 00:13:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.079 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2220840 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2220840 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2220840 ']' 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.079 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.079 [2024-07-15 17:36:35.162797] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:13:40.079 [2024-07-15 17:36:35.162884] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.079 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.338 [2024-07-15 17:36:35.240533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.338 [2024-07-15 17:36:35.366847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.338 [2024-07-15 17:36:35.366913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:40.338 [2024-07-15 17:36:35.366930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.338 [2024-07-15 17:36:35.366944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.338 [2024-07-15 17:36:35.366956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.338 [2024-07-15 17:36:35.368903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.338 [2024-07-15 17:36:35.368963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:40.338 [2024-07-15 17:36:35.372948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:40.338 [2024-07-15 17:36:35.372954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.596 [2024-07-15 17:36:35.536696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.596 Malloc0 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.596 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:13:40.597 [2024-07-15 17:36:35.590391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:40.597 { 00:13:40.597 "params": { 00:13:40.597 "name": "Nvme$subsystem", 00:13:40.597 "trtype": "$TEST_TRANSPORT", 00:13:40.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.597 "adrfam": "ipv4", 00:13:40.597 "trsvcid": "$NVMF_PORT", 00:13:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.597 "hdgst": ${hdgst:-false}, 00:13:40.597 "ddgst": ${ddgst:-false} 00:13:40.597 }, 00:13:40.597 "method": "bdev_nvme_attach_controller" 00:13:40.597 } 00:13:40.597 EOF 00:13:40.597 )") 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:40.597 17:36:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:40.597 "params": { 00:13:40.597 "name": "Nvme1", 00:13:40.597 "trtype": "tcp", 00:13:40.597 "traddr": "10.0.0.2", 00:13:40.597 "adrfam": "ipv4", 00:13:40.597 "trsvcid": "4420", 00:13:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:40.597 "hdgst": false, 00:13:40.597 "ddgst": false 00:13:40.597 }, 00:13:40.597 "method": "bdev_nvme_attach_controller" 00:13:40.597 }' 00:13:40.597 [2024-07-15 17:36:35.638025] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:13:40.597 [2024-07-15 17:36:35.638105] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220870 ] 00:13:40.597 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.597 [2024-07-15 17:36:35.701131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.856 [2024-07-15 17:36:35.815871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.856 [2024-07-15 17:36:35.815922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.856 [2024-07-15 17:36:35.815926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.115 I/O targets: 00:13:41.115 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:41.115 00:13:41.115 00:13:41.115 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.115 http://cunit.sourceforge.net/ 00:13:41.115 00:13:41.115 00:13:41.115 Suite: bdevio tests on: Nvme1n1 00:13:41.115 Test: blockdev write read block ...passed 00:13:41.373 Test: blockdev write zeroes read block ...passed 00:13:41.373 Test: blockdev write zeroes read no split ...passed 00:13:41.373 Test: blockdev write zeroes read split ...passed 00:13:41.373 Test: blockdev write zeroes read split partial ...passed 00:13:41.373 Test: blockdev reset ...[2024-07-15 17:36:36.372730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:41.373 [2024-07-15 17:36:36.372846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64c580 (9): Bad file descriptor 00:13:41.632 [2024-07-15 17:36:36.523243] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:41.632 passed 00:13:41.632 Test: blockdev write read 8 blocks ...passed 00:13:41.632 Test: blockdev write read size > 128k ...passed 00:13:41.632 Test: blockdev write read invalid size ...passed 00:13:41.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:41.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:41.632 Test: blockdev write read max offset ...passed 00:13:41.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:41.632 Test: blockdev writev readv 8 blocks ...passed 00:13:41.632 Test: blockdev writev readv 30 x 1block ...passed 00:13:41.892 Test: blockdev writev readv block ...passed 00:13:41.892 Test: blockdev writev readv size > 128k ...passed 00:13:41.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:41.892 Test: blockdev comparev and writev ...[2024-07-15 17:36:36.781006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.892 [2024-07-15 17:36:36.781044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:41.892 [2024-07-15 17:36:36.781068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.892 [2024-07-15 17:36:36.781087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:41.892 [2024-07-15 17:36:36.781451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.892 [2024-07-15 17:36:36.781475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:41.892 [2024-07-15 17:36:36.781497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.893 [2024-07-15 17:36:36.781513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:41.893 [2024-07-15 17:36:36.781867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.893 [2024-07-15 17:36:36.781900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:41.893 [2024-07-15 17:36:36.781924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.893 [2024-07-15 17:36:36.781941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:41.893 [2024-07-15 17:36:36.782296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.893 [2024-07-15 17:36:36.782321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:41.893 [2024-07-15 17:36:36.782342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:41.893 [2024-07-15 17:36:36.782359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:41.893 passed 00:13:41.893 Test: blockdev nvme passthru rw ...passed 00:13:41.893 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:36:36.866200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.893 [2024-07-15 17:36:36.866268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:41.893 [2024-07-15 17:36:36.866476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.893 [2024-07-15 17:36:36.866501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:41.893 [2024-07-15 17:36:36.866688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.893 [2024-07-15 17:36:36.866713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:41.893 [2024-07-15 17:36:36.866904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.893 [2024-07-15 17:36:36.866929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:41.893 passed 00:13:41.893 Test: blockdev nvme admin passthru ...passed 00:13:41.893 Test: blockdev copy ...passed 00:13:41.893 00:13:41.893 Run Summary: Type Total Ran Passed Failed Inactive 00:13:41.893 suites 1 1 n/a 0 0 00:13:41.893 tests 23 23 23 0 0 00:13:41.893 asserts 152 152 152 0 n/a 00:13:41.893 00:13:41.893 Elapsed time = 1.524 seconds 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.153 rmmod nvme_tcp 00:13:42.153 rmmod nvme_fabrics 00:13:42.153 rmmod nvme_keyring 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2220840 ']' 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2220840 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2220840 ']' 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2220840 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220840 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220840' 00:13:42.153 killing process with pid 2220840 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2220840 00:13:42.153 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2220840 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.724 17:36:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.630 17:36:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:44.630 00:13:44.630 real 0m6.582s 00:13:44.630 user 0m12.288s 00:13:44.630 sys 0m2.017s 00:13:44.630 17:36:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.631 17:36:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:44.631 ************************************ 00:13:44.631 END TEST nvmf_bdevio 00:13:44.631 ************************************ 00:13:44.631 17:36:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:44.631 17:36:39 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:44.631 17:36:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.631 17:36:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.631 17:36:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.631 ************************************ 00:13:44.631 START TEST nvmf_auth_target 00:13:44.631 ************************************ 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:44.631 * Looking for test storage... 
00:13:44.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.631 17:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.167 17:36:41 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:47.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:47.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:13:47.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:47.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.167 17:36:41 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:13:47.168 00:13:47.168 --- 10.0.0.2 ping statistics --- 00:13:47.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.168 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:13:47.168 00:13:47.168 --- 10.0.0.1 ping statistics --- 00:13:47.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.168 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2223056 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2223056 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2223056 ']' 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
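The nvmf_tcp_init sequence above splits the two E810 ports into an initiator/target pair: the first port (cvl_0_0) is moved into a fresh network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, TCP port 4420 is opened in iptables, and both directions are ping-checked before nvmf_tgt is started inside the namespace. A condensed sketch of those steps, restated from the trace above (device and namespace names are the ones used in this run; run as root, with the addr-flush steps omitted):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                               # target-side namespace
  ip link set cvl_0_0 netns "$NS"                  # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator keeps the second port
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                               # initiator -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator reachability

The namespace split forces initiator traffic out of one physical port and back in through the other instead of being short-circuited over loopback.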
00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.168 17:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2223080 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:47.168 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ad322d59ed1203b7526f2b552e15353ab17621467d74576c 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DmJ 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ad322d59ed1203b7526f2b552e15353ab17621467d74576c 0 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ad322d59ed1203b7526f2b552e15353ab17621467d74576c 0 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ad322d59ed1203b7526f2b552e15353ab17621467d74576c 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DmJ 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DmJ 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DmJ 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:47.427 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0af0ddbd230f130be55af3f66ed5b40df1392a5fd3364cc38b36b7c0b9bd4707 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xFz 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0af0ddbd230f130be55af3f66ed5b40df1392a5fd3364cc38b36b7c0b9bd4707 3 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0af0ddbd230f130be55af3f66ed5b40df1392a5fd3364cc38b36b7c0b9bd4707 3 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0af0ddbd230f130be55af3f66ed5b40df1392a5fd3364cc38b36b7c0b9bd4707 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xFz 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xFz 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xFz 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8c0e3e70a82319e2637b59c236221ca4 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SmZ 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8c0e3e70a82319e2637b59c236221ca4 1 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8c0e3e70a82319e2637b59c236221ca4 1 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=8c0e3e70a82319e2637b59c236221ca4 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SmZ 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SmZ 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.SmZ 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77cb7920c1d809e1e1a2ed65d0a88c14f60f81c77bdd0d8b 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.t1v 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77cb7920c1d809e1e1a2ed65d0a88c14f60f81c77bdd0d8b 2 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77cb7920c1d809e1e1a2ed65d0a88c14f60f81c77bdd0d8b 2 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77cb7920c1d809e1e1a2ed65d0a88c14f60f81c77bdd0d8b 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.t1v 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.t1v 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.t1v 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f191d287089dcf6fdd6a60e33698df8cfbafd0faf7c11aa4 00:13:47.428 
17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rMB 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f191d287089dcf6fdd6a60e33698df8cfbafd0faf7c11aa4 2 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f191d287089dcf6fdd6a60e33698df8cfbafd0faf7c11aa4 2 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f191d287089dcf6fdd6a60e33698df8cfbafd0faf7c11aa4 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rMB 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rMB 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.rMB 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ccea8c4ccbe9bd6479fbb7e4b1bd8744 00:13:47.428 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.c56 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ccea8c4ccbe9bd6479fbb7e4b1bd8744 1 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ccea8c4ccbe9bd6479fbb7e4b1bd8744 1 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ccea8c4ccbe9bd6479fbb7e4b1bd8744 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.c56 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.c56 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.c56 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=813822fb705e86d31c7dc08cc0e2f108ca49aaad750e0d6bbd1a7011249c0835 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AMU 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 813822fb705e86d31c7dc08cc0e2f108ca49aaad750e0d6bbd1a7011249c0835 3 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 813822fb705e86d31c7dc08cc0e2f108ca49aaad750e0d6bbd1a7011249c0835 3 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=813822fb705e86d31c7dc08cc0e2f108ca49aaad750e0d6bbd1a7011249c0835 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AMU 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AMU 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.AMU 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2223056 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2223056 ']' 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
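Each gen_dhchap_key call above pulls random bytes with xxd -p -c0 -l <n> /dev/urandom and then formats the resulting hex string as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash-id>:<base64 blob>:, which is exactly what reappears later in the --dhchap-secret and --dhchap-ctrl-secret arguments to nvme connect. A minimal standalone sketch of that formatting step (an illustration, not the helper from nvmf/common.sh; it assumes the blob is the ASCII secret followed by its zlib CRC32 appended as four little-endian bytes, which is how nvme-cli builds the same representation):

  # Illustrative sketch only; hash_id: 00=none, 01=sha256, 02=sha384, 03=sha512.
  gen_key_sketch() {
    local len_bytes=$1 hash_id=$2
    local secret
    secret=$(xxd -p -c0 -l "$len_bytes" /dev/urandom)   # hex string, 2*len_bytes characters
    python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); crc=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:%s:%s:" % (sys.argv[2], base64.b64encode(s+crc).decode()))' "$secret" "$hash_id"
  }

  gen_key_sketch 24 00   # 48-character secret, no transform hash, same shape as key0 above
  gen_key_sketch 32 03   # 64-character secret marked for sha512, same shape as ckey0 above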
00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.687 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2223080 /var/tmp/host.sock 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2223080 ']' 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:47.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.945 17:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DmJ 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.203 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DmJ 00:13:48.204 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DmJ 00:13:48.461 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xFz ]] 00:13:48.461 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xFz 00:13:48.461 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.461 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.461 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.461 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xFz 00:13:48.461 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xFz 00:13:48.719 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:48.719 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SmZ 00:13:48.719 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.719 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.719 17:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.719 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.SmZ 00:13:48.719 17:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.SmZ 00:13:48.976 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.t1v ]] 00:13:48.976 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.t1v 00:13:48.976 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.976 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.976 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.976 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.t1v 00:13:48.976 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.t1v 00:13:49.234 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:49.234 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rMB 00:13:49.234 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.234 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.234 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.234 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rMB 00:13:49.234 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rMB 00:13:49.492 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.c56 ]] 00:13:49.492 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.c56 00:13:49.492 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.492 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.492 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.492 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.c56 00:13:49.492 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.c56 00:13:49.750 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:49.750 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.AMU 00:13:49.751 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.751 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.751 17:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.751 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.AMU 00:13:49.751 17:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.AMU 00:13:50.008 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:50.008 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:50.008 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:50.008 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.008 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:50.008 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.266 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.267 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.525 00:13:50.784 17:36:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.784 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.784 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.784 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.784 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.784 17:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.784 17:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.079 17:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.079 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.079 { 00:13:51.079 "cntlid": 1, 00:13:51.079 "qid": 0, 00:13:51.079 "state": "enabled", 00:13:51.079 "thread": "nvmf_tgt_poll_group_000", 00:13:51.079 "listen_address": { 00:13:51.079 "trtype": "TCP", 00:13:51.079 "adrfam": "IPv4", 00:13:51.079 "traddr": "10.0.0.2", 00:13:51.079 "trsvcid": "4420" 00:13:51.079 }, 00:13:51.079 "peer_address": { 00:13:51.079 "trtype": "TCP", 00:13:51.079 "adrfam": "IPv4", 00:13:51.079 "traddr": "10.0.0.1", 00:13:51.079 "trsvcid": "57508" 00:13:51.079 }, 00:13:51.079 "auth": { 00:13:51.079 "state": "completed", 00:13:51.079 "digest": "sha256", 00:13:51.079 "dhgroup": "null" 00:13:51.079 } 00:13:51.079 } 00:13:51.079 ]' 00:13:51.079 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.079 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.079 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.079 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:51.079 17:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.079 17:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.079 17:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.079 17:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.360 17:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:13:52.295 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.295 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:52.295 17:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.295 17:36:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.295 17:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.295 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.295 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:52.295 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.553 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.554 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.811 00:13:52.811 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.811 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.811 17:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.069 { 00:13:53.069 "cntlid": 3, 00:13:53.069 "qid": 0, 00:13:53.069 
"state": "enabled", 00:13:53.069 "thread": "nvmf_tgt_poll_group_000", 00:13:53.069 "listen_address": { 00:13:53.069 "trtype": "TCP", 00:13:53.069 "adrfam": "IPv4", 00:13:53.069 "traddr": "10.0.0.2", 00:13:53.069 "trsvcid": "4420" 00:13:53.069 }, 00:13:53.069 "peer_address": { 00:13:53.069 "trtype": "TCP", 00:13:53.069 "adrfam": "IPv4", 00:13:53.069 "traddr": "10.0.0.1", 00:13:53.069 "trsvcid": "57538" 00:13:53.069 }, 00:13:53.069 "auth": { 00:13:53.069 "state": "completed", 00:13:53.069 "digest": "sha256", 00:13:53.069 "dhgroup": "null" 00:13:53.069 } 00:13:53.069 } 00:13:53.069 ]' 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.069 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.326 17:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.255 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:54.836 17:36:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.836 17:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:55.094 00:13:55.094 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.094 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.094 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.352 { 00:13:55.352 "cntlid": 5, 00:13:55.352 "qid": 0, 00:13:55.352 "state": "enabled", 00:13:55.352 "thread": "nvmf_tgt_poll_group_000", 00:13:55.352 "listen_address": { 00:13:55.352 "trtype": "TCP", 00:13:55.352 "adrfam": "IPv4", 00:13:55.352 "traddr": "10.0.0.2", 00:13:55.352 "trsvcid": "4420" 00:13:55.352 }, 00:13:55.352 "peer_address": { 00:13:55.352 "trtype": "TCP", 00:13:55.352 "adrfam": "IPv4", 00:13:55.352 "traddr": "10.0.0.1", 00:13:55.352 "trsvcid": "57556" 00:13:55.352 }, 00:13:55.352 "auth": { 00:13:55.352 "state": "completed", 00:13:55.352 "digest": "sha256", 00:13:55.352 "dhgroup": "null" 00:13:55.352 } 00:13:55.352 } 00:13:55.352 ]' 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.352 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.610 17:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.543 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:56.800 17:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:57.366 00:13:57.366 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.366 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.366 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.624 { 00:13:57.624 "cntlid": 7, 00:13:57.624 "qid": 0, 00:13:57.624 "state": "enabled", 00:13:57.624 "thread": "nvmf_tgt_poll_group_000", 00:13:57.624 "listen_address": { 00:13:57.624 "trtype": "TCP", 00:13:57.624 "adrfam": "IPv4", 00:13:57.624 "traddr": "10.0.0.2", 00:13:57.624 "trsvcid": "4420" 00:13:57.624 }, 00:13:57.624 "peer_address": { 00:13:57.624 "trtype": "TCP", 00:13:57.624 "adrfam": "IPv4", 00:13:57.624 "traddr": "10.0.0.1", 00:13:57.624 "trsvcid": "34312" 00:13:57.624 }, 00:13:57.624 "auth": { 00:13:57.624 "state": "completed", 00:13:57.624 "digest": "sha256", 00:13:57.624 "dhgroup": "null" 00:13:57.624 } 00:13:57.624 } 00:13:57.624 ]' 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.624 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.884 17:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:58.819 17:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.077 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.643 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.643 { 00:13:59.643 "cntlid": 9, 00:13:59.643 "qid": 0, 00:13:59.643 "state": "enabled", 00:13:59.643 "thread": "nvmf_tgt_poll_group_000", 00:13:59.643 "listen_address": { 00:13:59.643 "trtype": "TCP", 00:13:59.643 "adrfam": "IPv4", 00:13:59.643 "traddr": "10.0.0.2", 00:13:59.643 "trsvcid": "4420" 00:13:59.643 }, 00:13:59.643 "peer_address": { 00:13:59.643 "trtype": "TCP", 00:13:59.643 "adrfam": "IPv4", 00:13:59.643 "traddr": "10.0.0.1", 00:13:59.643 "trsvcid": "34332" 00:13:59.643 }, 00:13:59.643 "auth": { 00:13:59.643 "state": "completed", 00:13:59.643 "digest": "sha256", 00:13:59.643 "dhgroup": "ffdhe2048" 00:13:59.643 } 00:13:59.643 } 00:13:59.643 ]' 00:13:59.643 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.902 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.902 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.902 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:59.902 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.902 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.902 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.902 17:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.160 17:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:01.098 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:01.355 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:01.355 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.356 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.614 00:14:01.873 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.873 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.873 17:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.132 { 00:14:02.132 "cntlid": 11, 00:14:02.132 "qid": 0, 00:14:02.132 "state": "enabled", 00:14:02.132 "thread": "nvmf_tgt_poll_group_000", 00:14:02.132 "listen_address": { 00:14:02.132 "trtype": "TCP", 00:14:02.132 "adrfam": "IPv4", 00:14:02.132 "traddr": "10.0.0.2", 00:14:02.132 "trsvcid": "4420" 00:14:02.132 }, 00:14:02.132 "peer_address": { 00:14:02.132 "trtype": "TCP", 00:14:02.132 "adrfam": "IPv4", 00:14:02.132 "traddr": "10.0.0.1", 00:14:02.132 "trsvcid": "34360" 00:14:02.132 }, 00:14:02.132 "auth": { 00:14:02.132 "state": "completed", 00:14:02.132 "digest": "sha256", 00:14:02.132 "dhgroup": "ffdhe2048" 00:14:02.132 } 00:14:02.132 } 00:14:02.132 ]' 00:14:02.132 
17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.132 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.391 17:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.329 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.588 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.846 00:14:03.846 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.846 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.846 17:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.105 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.105 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.105 17:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.105 17:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.105 17:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.105 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.105 { 00:14:04.105 "cntlid": 13, 00:14:04.105 "qid": 0, 00:14:04.105 "state": "enabled", 00:14:04.105 "thread": "nvmf_tgt_poll_group_000", 00:14:04.105 "listen_address": { 00:14:04.105 "trtype": "TCP", 00:14:04.105 "adrfam": "IPv4", 00:14:04.105 "traddr": "10.0.0.2", 00:14:04.105 "trsvcid": "4420" 00:14:04.105 }, 00:14:04.105 "peer_address": { 00:14:04.105 "trtype": "TCP", 00:14:04.105 "adrfam": "IPv4", 00:14:04.105 "traddr": "10.0.0.1", 00:14:04.105 "trsvcid": "34390" 00:14:04.105 }, 00:14:04.105 "auth": { 00:14:04.105 "state": "completed", 00:14:04.105 "digest": "sha256", 00:14:04.105 "dhgroup": "ffdhe2048" 00:14:04.105 } 00:14:04.105 } 00:14:04.105 ]' 00:14:04.105 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.363 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.363 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.363 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:04.363 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.363 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.363 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.363 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.622 17:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:05.556 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.814 17:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:06.413 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.413 { 00:14:06.413 "cntlid": 15, 00:14:06.413 "qid": 0, 00:14:06.413 "state": "enabled", 00:14:06.413 "thread": "nvmf_tgt_poll_group_000", 00:14:06.413 "listen_address": { 00:14:06.413 "trtype": "TCP", 00:14:06.413 "adrfam": "IPv4", 00:14:06.413 "traddr": "10.0.0.2", 00:14:06.413 "trsvcid": "4420" 00:14:06.413 }, 00:14:06.413 "peer_address": { 00:14:06.413 "trtype": "TCP", 00:14:06.413 "adrfam": "IPv4", 00:14:06.413 "traddr": "10.0.0.1", 00:14:06.413 "trsvcid": "60320" 00:14:06.413 }, 00:14:06.413 "auth": { 00:14:06.413 "state": "completed", 00:14:06.413 "digest": "sha256", 00:14:06.413 "dhgroup": "ffdhe2048" 00:14:06.413 } 00:14:06.413 } 00:14:06.413 ]' 00:14:06.413 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.670 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.670 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.670 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.670 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.670 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.670 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.670 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.928 17:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:07.861 17:37:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:07.861 17:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.119 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.376 00:14:08.376 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.376 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.376 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.633 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.633 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.633 17:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.633 17:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.633 17:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.633 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.633 { 00:14:08.633 "cntlid": 17, 00:14:08.633 "qid": 0, 00:14:08.633 "state": "enabled", 00:14:08.633 "thread": "nvmf_tgt_poll_group_000", 00:14:08.633 "listen_address": { 00:14:08.633 "trtype": "TCP", 00:14:08.633 "adrfam": "IPv4", 
00:14:08.633 "traddr": "10.0.0.2", 00:14:08.633 "trsvcid": "4420" 00:14:08.633 }, 00:14:08.633 "peer_address": { 00:14:08.633 "trtype": "TCP", 00:14:08.633 "adrfam": "IPv4", 00:14:08.633 "traddr": "10.0.0.1", 00:14:08.633 "trsvcid": "60342" 00:14:08.633 }, 00:14:08.633 "auth": { 00:14:08.633 "state": "completed", 00:14:08.633 "digest": "sha256", 00:14:08.633 "dhgroup": "ffdhe3072" 00:14:08.633 } 00:14:08.633 } 00:14:08.633 ]' 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.891 17:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.149 17:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:10.081 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:10.339 17:37:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.339 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.906 00:14:10.906 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.906 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.906 17:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.906 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.906 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.906 17:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.906 17:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.163 17:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.163 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.163 { 00:14:11.163 "cntlid": 19, 00:14:11.163 "qid": 0, 00:14:11.163 "state": "enabled", 00:14:11.163 "thread": "nvmf_tgt_poll_group_000", 00:14:11.163 "listen_address": { 00:14:11.163 "trtype": "TCP", 00:14:11.163 "adrfam": "IPv4", 00:14:11.163 "traddr": "10.0.0.2", 00:14:11.163 "trsvcid": "4420" 00:14:11.163 }, 00:14:11.163 "peer_address": { 00:14:11.163 "trtype": "TCP", 00:14:11.163 "adrfam": "IPv4", 00:14:11.163 "traddr": "10.0.0.1", 00:14:11.163 "trsvcid": "60378" 00:14:11.163 }, 00:14:11.163 "auth": { 00:14:11.163 "state": "completed", 00:14:11.163 "digest": "sha256", 00:14:11.163 "dhgroup": "ffdhe3072" 00:14:11.163 } 00:14:11.164 } 00:14:11.164 ]' 00:14:11.164 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.164 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.164 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.164 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:11.164 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.164 17:37:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.164 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.164 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.421 17:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:12.354 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.612 17:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.177 00:14:13.177 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.177 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.177 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.434 { 00:14:13.434 "cntlid": 21, 00:14:13.434 "qid": 0, 00:14:13.434 "state": "enabled", 00:14:13.434 "thread": "nvmf_tgt_poll_group_000", 00:14:13.434 "listen_address": { 00:14:13.434 "trtype": "TCP", 00:14:13.434 "adrfam": "IPv4", 00:14:13.434 "traddr": "10.0.0.2", 00:14:13.434 "trsvcid": "4420" 00:14:13.434 }, 00:14:13.434 "peer_address": { 00:14:13.434 "trtype": "TCP", 00:14:13.434 "adrfam": "IPv4", 00:14:13.434 "traddr": "10.0.0.1", 00:14:13.434 "trsvcid": "60412" 00:14:13.434 }, 00:14:13.434 "auth": { 00:14:13.434 "state": "completed", 00:14:13.434 "digest": "sha256", 00:14:13.434 "dhgroup": "ffdhe3072" 00:14:13.434 } 00:14:13.434 } 00:14:13.434 ]' 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.434 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.691 17:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.622 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.908 17:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:15.474 00:14:15.474 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.474 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.474 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.731 { 00:14:15.731 "cntlid": 23, 00:14:15.731 "qid": 0, 00:14:15.731 "state": "enabled", 00:14:15.731 "thread": "nvmf_tgt_poll_group_000", 00:14:15.731 "listen_address": { 00:14:15.731 "trtype": "TCP", 00:14:15.731 "adrfam": "IPv4", 00:14:15.731 "traddr": "10.0.0.2", 00:14:15.731 "trsvcid": "4420" 00:14:15.731 }, 00:14:15.731 "peer_address": { 00:14:15.731 "trtype": "TCP", 00:14:15.731 "adrfam": "IPv4", 00:14:15.731 "traddr": "10.0.0.1", 00:14:15.731 "trsvcid": "60430" 00:14:15.731 }, 00:14:15.731 "auth": { 00:14:15.731 "state": "completed", 00:14:15.731 "digest": "sha256", 00:14:15.731 "dhgroup": "ffdhe3072" 00:14:15.731 } 00:14:15.731 } 00:14:15.731 ]' 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.731 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.994 17:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:16.928 17:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.186 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.756 00:14:17.756 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.756 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.756 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.014 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.014 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.014 17:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.014 17:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.014 17:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.014 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.014 { 00:14:18.014 "cntlid": 25, 00:14:18.014 "qid": 0, 00:14:18.014 "state": "enabled", 00:14:18.014 "thread": "nvmf_tgt_poll_group_000", 00:14:18.014 "listen_address": { 00:14:18.014 "trtype": "TCP", 00:14:18.014 "adrfam": "IPv4", 00:14:18.014 "traddr": "10.0.0.2", 00:14:18.014 "trsvcid": "4420" 00:14:18.014 }, 00:14:18.014 "peer_address": { 00:14:18.014 "trtype": "TCP", 00:14:18.014 "adrfam": "IPv4", 00:14:18.014 "traddr": "10.0.0.1", 00:14:18.014 "trsvcid": "32796" 00:14:18.014 }, 00:14:18.014 "auth": { 00:14:18.014 "state": "completed", 00:14:18.014 "digest": "sha256", 00:14:18.014 "dhgroup": "ffdhe4096" 00:14:18.014 } 00:14:18.014 } 00:14:18.014 ]' 00:14:18.014 17:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.014 17:37:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.014 17:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.014 17:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:18.014 17:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.014 17:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.014 17:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.014 17:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.272 17:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:19.239 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.498 17:37:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.498 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.063 00:14:20.063 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.063 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.063 17:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.063 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.063 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.063 17:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.063 17:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.063 17:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.063 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.063 { 00:14:20.063 "cntlid": 27, 00:14:20.063 "qid": 0, 00:14:20.063 "state": "enabled", 00:14:20.063 "thread": "nvmf_tgt_poll_group_000", 00:14:20.063 "listen_address": { 00:14:20.063 "trtype": "TCP", 00:14:20.063 "adrfam": "IPv4", 00:14:20.063 "traddr": "10.0.0.2", 00:14:20.063 "trsvcid": "4420" 00:14:20.063 }, 00:14:20.063 "peer_address": { 00:14:20.063 "trtype": "TCP", 00:14:20.063 "adrfam": "IPv4", 00:14:20.063 "traddr": "10.0.0.1", 00:14:20.063 "trsvcid": "32836" 00:14:20.063 }, 00:14:20.063 "auth": { 00:14:20.063 "state": "completed", 00:14:20.063 "digest": "sha256", 00:14:20.063 "dhgroup": "ffdhe4096" 00:14:20.063 } 00:14:20.063 } 00:14:20.063 ]' 00:14:20.063 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.321 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.321 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.321 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:20.321 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.321 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.321 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.321 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.579 17:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.514 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.772 17:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.342 00:14:22.342 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.342 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.342 17:37:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.342 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.342 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.342 17:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.342 17:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.601 { 00:14:22.601 "cntlid": 29, 00:14:22.601 "qid": 0, 00:14:22.601 "state": "enabled", 00:14:22.601 "thread": "nvmf_tgt_poll_group_000", 00:14:22.601 "listen_address": { 00:14:22.601 "trtype": "TCP", 00:14:22.601 "adrfam": "IPv4", 00:14:22.601 "traddr": "10.0.0.2", 00:14:22.601 "trsvcid": "4420" 00:14:22.601 }, 00:14:22.601 "peer_address": { 00:14:22.601 "trtype": "TCP", 00:14:22.601 "adrfam": "IPv4", 00:14:22.601 "traddr": "10.0.0.1", 00:14:22.601 "trsvcid": "32868" 00:14:22.601 }, 00:14:22.601 "auth": { 00:14:22.601 "state": "completed", 00:14:22.601 "digest": "sha256", 00:14:22.601 "dhgroup": "ffdhe4096" 00:14:22.601 } 00:14:22.601 } 00:14:22.601 ]' 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.601 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.859 17:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:14:23.798 17:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.798 17:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.798 17:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.798 17:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.798 17:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.798 17:37:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.798 17:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.798 17:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.056 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.314 00:14:24.574 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.574 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.574 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.833 { 00:14:24.833 "cntlid": 31, 00:14:24.833 "qid": 0, 00:14:24.833 "state": "enabled", 00:14:24.833 "thread": "nvmf_tgt_poll_group_000", 00:14:24.833 "listen_address": { 00:14:24.833 "trtype": "TCP", 00:14:24.833 "adrfam": "IPv4", 00:14:24.833 "traddr": "10.0.0.2", 00:14:24.833 "trsvcid": "4420" 00:14:24.833 }, 
00:14:24.833 "peer_address": { 00:14:24.833 "trtype": "TCP", 00:14:24.833 "adrfam": "IPv4", 00:14:24.833 "traddr": "10.0.0.1", 00:14:24.833 "trsvcid": "32898" 00:14:24.833 }, 00:14:24.833 "auth": { 00:14:24.833 "state": "completed", 00:14:24.833 "digest": "sha256", 00:14:24.833 "dhgroup": "ffdhe4096" 00:14:24.833 } 00:14:24.833 } 00:14:24.833 ]' 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.833 17:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.092 17:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:26.028 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.286 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.862 00:14:26.862 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.862 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.862 17:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.120 { 00:14:27.120 "cntlid": 33, 00:14:27.120 "qid": 0, 00:14:27.120 "state": "enabled", 00:14:27.120 "thread": "nvmf_tgt_poll_group_000", 00:14:27.120 "listen_address": { 00:14:27.120 "trtype": "TCP", 00:14:27.120 "adrfam": "IPv4", 00:14:27.120 "traddr": "10.0.0.2", 00:14:27.120 "trsvcid": "4420" 00:14:27.120 }, 00:14:27.120 "peer_address": { 00:14:27.120 "trtype": "TCP", 00:14:27.120 "adrfam": "IPv4", 00:14:27.120 "traddr": "10.0.0.1", 00:14:27.120 "trsvcid": "40140" 00:14:27.120 }, 00:14:27.120 "auth": { 00:14:27.120 "state": "completed", 00:14:27.120 "digest": "sha256", 00:14:27.120 "dhgroup": "ffdhe6144" 00:14:27.120 } 00:14:27.120 } 00:14:27.120 ]' 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.120 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:27.121 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.380 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.380 17:37:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.380 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.641 17:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:28.578 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.835 17:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.399 00:14:29.399 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.399 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.399 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.656 { 00:14:29.656 "cntlid": 35, 00:14:29.656 "qid": 0, 00:14:29.656 "state": "enabled", 00:14:29.656 "thread": "nvmf_tgt_poll_group_000", 00:14:29.656 "listen_address": { 00:14:29.656 "trtype": "TCP", 00:14:29.656 "adrfam": "IPv4", 00:14:29.656 "traddr": "10.0.0.2", 00:14:29.656 "trsvcid": "4420" 00:14:29.656 }, 00:14:29.656 "peer_address": { 00:14:29.656 "trtype": "TCP", 00:14:29.656 "adrfam": "IPv4", 00:14:29.656 "traddr": "10.0.0.1", 00:14:29.656 "trsvcid": "40168" 00:14:29.656 }, 00:14:29.656 "auth": { 00:14:29.656 "state": "completed", 00:14:29.656 "digest": "sha256", 00:14:29.656 "dhgroup": "ffdhe6144" 00:14:29.656 } 00:14:29.656 } 00:14:29.656 ]' 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.656 17:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.914 17:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:14:30.846 17:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.846 17:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.846 17:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.846 17:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.104 17:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.104 17:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.104 17:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.104 17:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.361 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.362 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.926 00:14:31.926 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.926 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.926 17:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.184 { 00:14:32.184 "cntlid": 37, 00:14:32.184 "qid": 0, 00:14:32.184 "state": "enabled", 00:14:32.184 "thread": "nvmf_tgt_poll_group_000", 00:14:32.184 "listen_address": { 00:14:32.184 "trtype": "TCP", 00:14:32.184 "adrfam": "IPv4", 00:14:32.184 "traddr": "10.0.0.2", 00:14:32.184 "trsvcid": "4420" 00:14:32.184 }, 00:14:32.184 "peer_address": { 00:14:32.184 "trtype": "TCP", 00:14:32.184 "adrfam": "IPv4", 00:14:32.184 "traddr": "10.0.0.1", 00:14:32.184 "trsvcid": "40188" 00:14:32.184 }, 00:14:32.184 "auth": { 00:14:32.184 "state": "completed", 00:14:32.184 "digest": "sha256", 00:14:32.184 "dhgroup": "ffdhe6144" 00:14:32.184 } 00:14:32.184 } 00:14:32.184 ]' 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.184 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.443 17:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.446 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.705 17:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.272 00:14:34.272 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.272 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.272 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.530 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.530 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.530 17:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.530 17:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.788 { 00:14:34.788 "cntlid": 39, 00:14:34.788 "qid": 0, 00:14:34.788 "state": "enabled", 00:14:34.788 "thread": "nvmf_tgt_poll_group_000", 00:14:34.788 "listen_address": { 00:14:34.788 "trtype": "TCP", 00:14:34.788 "adrfam": "IPv4", 00:14:34.788 "traddr": "10.0.0.2", 00:14:34.788 "trsvcid": "4420" 00:14:34.788 }, 00:14:34.788 "peer_address": { 00:14:34.788 "trtype": "TCP", 00:14:34.788 "adrfam": "IPv4", 00:14:34.788 "traddr": "10.0.0.1", 00:14:34.788 "trsvcid": "40214" 00:14:34.788 }, 00:14:34.788 "auth": { 00:14:34.788 "state": "completed", 00:14:34.788 "digest": "sha256", 00:14:34.788 "dhgroup": "ffdhe6144" 00:14:34.788 } 00:14:34.788 } 00:14:34.788 ]' 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.788 17:37:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.788 17:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.046 17:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:35.976 17:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.233 17:37:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.233 17:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.165 00:14:37.165 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.165 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.165 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.421 { 00:14:37.421 "cntlid": 41, 00:14:37.421 "qid": 0, 00:14:37.421 "state": "enabled", 00:14:37.421 "thread": "nvmf_tgt_poll_group_000", 00:14:37.421 "listen_address": { 00:14:37.421 "trtype": "TCP", 00:14:37.421 "adrfam": "IPv4", 00:14:37.421 "traddr": "10.0.0.2", 00:14:37.421 "trsvcid": "4420" 00:14:37.421 }, 00:14:37.421 "peer_address": { 00:14:37.421 "trtype": "TCP", 00:14:37.421 "adrfam": "IPv4", 00:14:37.421 "traddr": "10.0.0.1", 00:14:37.421 "trsvcid": "44348" 00:14:37.421 }, 00:14:37.421 "auth": { 00:14:37.421 "state": "completed", 00:14:37.421 "digest": "sha256", 00:14:37.421 "dhgroup": "ffdhe8192" 00:14:37.421 } 00:14:37.421 } 00:14:37.421 ]' 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.421 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.678 17:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:38.629 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:38.886 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:38.886 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.887 17:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.821 00:14:39.821 17:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.821 17:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.821 17:37:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.080 { 00:14:40.080 "cntlid": 43, 00:14:40.080 "qid": 0, 00:14:40.080 "state": "enabled", 00:14:40.080 "thread": "nvmf_tgt_poll_group_000", 00:14:40.080 "listen_address": { 00:14:40.080 "trtype": "TCP", 00:14:40.080 "adrfam": "IPv4", 00:14:40.080 "traddr": "10.0.0.2", 00:14:40.080 "trsvcid": "4420" 00:14:40.080 }, 00:14:40.080 "peer_address": { 00:14:40.080 "trtype": "TCP", 00:14:40.080 "adrfam": "IPv4", 00:14:40.080 "traddr": "10.0.0.1", 00:14:40.080 "trsvcid": "44362" 00:14:40.080 }, 00:14:40.080 "auth": { 00:14:40.080 "state": "completed", 00:14:40.080 "digest": "sha256", 00:14:40.080 "dhgroup": "ffdhe8192" 00:14:40.080 } 00:14:40.080 } 00:14:40.080 ]' 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.080 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.338 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.338 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.338 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.338 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.338 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.596 17:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:14:41.530 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.530 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.530 17:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.530 17:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.530 17:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.530 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.530 17:37:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.530 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.789 17:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.790 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.790 { 00:14:42.790 "cntlid": 45, 00:14:42.790 "qid": 0, 00:14:42.790 "state": "enabled", 00:14:42.790 "thread": "nvmf_tgt_poll_group_000", 00:14:42.790 "listen_address": { 00:14:42.790 "trtype": "TCP", 00:14:42.790 "adrfam": "IPv4", 00:14:42.790 "traddr": "10.0.0.2", 00:14:42.790 "trsvcid": "4420" 00:14:42.790 }, 00:14:42.790 
"peer_address": { 00:14:42.790 "trtype": "TCP", 00:14:42.790 "adrfam": "IPv4", 00:14:42.790 "traddr": "10.0.0.1", 00:14:42.790 "trsvcid": "44384" 00:14:42.790 }, 00:14:42.790 "auth": { 00:14:42.790 "state": "completed", 00:14:42.790 "digest": "sha256", 00:14:42.790 "dhgroup": "ffdhe8192" 00:14:42.790 } 00:14:42.790 } 00:14:42.790 ]' 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.790 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.048 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.048 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.048 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.048 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.048 17:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.306 17:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:14:44.238 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.238 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.238 17:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.238 17:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.238 17:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.238 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.239 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.239 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.496 17:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.430 00:14:45.430 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.430 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.430 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.688 { 00:14:45.688 "cntlid": 47, 00:14:45.688 "qid": 0, 00:14:45.688 "state": "enabled", 00:14:45.688 "thread": "nvmf_tgt_poll_group_000", 00:14:45.688 "listen_address": { 00:14:45.688 "trtype": "TCP", 00:14:45.688 "adrfam": "IPv4", 00:14:45.688 "traddr": "10.0.0.2", 00:14:45.688 "trsvcid": "4420" 00:14:45.688 }, 00:14:45.688 "peer_address": { 00:14:45.688 "trtype": "TCP", 00:14:45.688 "adrfam": "IPv4", 00:14:45.688 "traddr": "10.0.0.1", 00:14:45.688 "trsvcid": "44418" 00:14:45.688 }, 00:14:45.688 "auth": { 00:14:45.688 "state": "completed", 00:14:45.688 "digest": "sha256", 00:14:45.688 "dhgroup": "ffdhe8192" 00:14:45.688 } 00:14:45.688 } 00:14:45.688 ]' 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.688 17:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.688 17:37:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.946 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:46.880 17:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.139 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.762 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.762 { 00:14:47.762 "cntlid": 49, 00:14:47.762 "qid": 0, 00:14:47.762 "state": "enabled", 00:14:47.762 "thread": "nvmf_tgt_poll_group_000", 00:14:47.762 "listen_address": { 00:14:47.762 "trtype": "TCP", 00:14:47.762 "adrfam": "IPv4", 00:14:47.762 "traddr": "10.0.0.2", 00:14:47.762 "trsvcid": "4420" 00:14:47.762 }, 00:14:47.762 "peer_address": { 00:14:47.762 "trtype": "TCP", 00:14:47.762 "adrfam": "IPv4", 00:14:47.762 "traddr": "10.0.0.1", 00:14:47.762 "trsvcid": "57654" 00:14:47.762 }, 00:14:47.762 "auth": { 00:14:47.762 "state": "completed", 00:14:47.762 "digest": "sha384", 00:14:47.762 "dhgroup": "null" 00:14:47.762 } 00:14:47.762 } 00:14:47.762 ]' 00:14:47.762 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.020 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.020 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.020 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:48.020 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.020 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.020 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.020 17:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.278 17:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:49.210 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.468 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.726 00:14:49.726 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.726 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.726 17:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.984 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.984 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.984 17:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.984 17:37:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.984 17:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.984 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.984 { 00:14:49.984 "cntlid": 51, 00:14:49.984 "qid": 0, 00:14:49.984 "state": "enabled", 00:14:49.984 "thread": "nvmf_tgt_poll_group_000", 00:14:49.984 "listen_address": { 00:14:49.984 "trtype": "TCP", 00:14:49.984 "adrfam": "IPv4", 00:14:49.984 "traddr": "10.0.0.2", 00:14:49.984 "trsvcid": "4420" 00:14:49.984 }, 00:14:49.984 "peer_address": { 00:14:49.984 "trtype": "TCP", 00:14:49.985 "adrfam": "IPv4", 00:14:49.985 "traddr": "10.0.0.1", 00:14:49.985 "trsvcid": "57676" 00:14:49.985 }, 00:14:49.985 "auth": { 00:14:49.985 "state": "completed", 00:14:49.985 "digest": "sha384", 00:14:49.985 "dhgroup": "null" 00:14:49.985 } 00:14:49.985 } 00:14:49.985 ]' 00:14:49.985 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.243 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.243 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.243 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:50.243 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.243 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.243 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.243 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.501 17:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:51.432 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:51.689 17:37:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.689 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.946 00:14:51.946 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.946 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.946 17:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.204 { 00:14:52.204 "cntlid": 53, 00:14:52.204 "qid": 0, 00:14:52.204 "state": "enabled", 00:14:52.204 "thread": "nvmf_tgt_poll_group_000", 00:14:52.204 "listen_address": { 00:14:52.204 "trtype": "TCP", 00:14:52.204 "adrfam": "IPv4", 00:14:52.204 "traddr": "10.0.0.2", 00:14:52.204 "trsvcid": "4420" 00:14:52.204 }, 00:14:52.204 "peer_address": { 00:14:52.204 "trtype": "TCP", 00:14:52.204 "adrfam": "IPv4", 00:14:52.204 "traddr": "10.0.0.1", 00:14:52.204 "trsvcid": "57712" 00:14:52.204 }, 00:14:52.204 "auth": { 00:14:52.204 "state": "completed", 00:14:52.204 "digest": "sha384", 00:14:52.204 "dhgroup": "null" 00:14:52.204 } 00:14:52.204 } 00:14:52.204 ]' 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:52.204 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.462 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.462 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.462 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.720 17:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:53.653 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.911 17:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:54.169 00:14:54.169 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.169 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.169 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.427 { 00:14:54.427 "cntlid": 55, 00:14:54.427 "qid": 0, 00:14:54.427 "state": "enabled", 00:14:54.427 "thread": "nvmf_tgt_poll_group_000", 00:14:54.427 "listen_address": { 00:14:54.427 "trtype": "TCP", 00:14:54.427 "adrfam": "IPv4", 00:14:54.427 "traddr": "10.0.0.2", 00:14:54.427 "trsvcid": "4420" 00:14:54.427 }, 00:14:54.427 "peer_address": { 00:14:54.427 "trtype": "TCP", 00:14:54.427 "adrfam": "IPv4", 00:14:54.427 "traddr": "10.0.0.1", 00:14:54.427 "trsvcid": "57740" 00:14:54.427 }, 00:14:54.427 "auth": { 00:14:54.427 "state": "completed", 00:14:54.427 "digest": "sha384", 00:14:54.427 "dhgroup": "null" 00:14:54.427 } 00:14:54.427 } 00:14:54.427 ]' 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.427 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.685 17:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:14:55.618 17:37:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:55.876 17:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.133 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.391 00:14:56.391 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.391 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.391 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.649 { 00:14:56.649 "cntlid": 57, 00:14:56.649 "qid": 0, 00:14:56.649 "state": "enabled", 00:14:56.649 "thread": "nvmf_tgt_poll_group_000", 00:14:56.649 "listen_address": { 00:14:56.649 "trtype": "TCP", 00:14:56.649 "adrfam": "IPv4", 00:14:56.649 "traddr": "10.0.0.2", 00:14:56.649 "trsvcid": "4420" 00:14:56.649 }, 00:14:56.649 "peer_address": { 00:14:56.649 "trtype": "TCP", 00:14:56.649 "adrfam": "IPv4", 00:14:56.649 "traddr": "10.0.0.1", 00:14:56.649 "trsvcid": "56534" 00:14:56.649 }, 00:14:56.649 "auth": { 00:14:56.649 "state": "completed", 00:14:56.649 "digest": "sha384", 00:14:56.649 "dhgroup": "ffdhe2048" 00:14:56.649 } 00:14:56.649 } 00:14:56.649 ]' 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.649 17:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.907 17:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:14:58.289 17:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.289 17:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.289 17:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.289 17:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.289 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.547 00:14:58.547 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.547 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.547 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.805 { 00:14:58.805 "cntlid": 59, 00:14:58.805 "qid": 0, 00:14:58.805 "state": "enabled", 00:14:58.805 "thread": "nvmf_tgt_poll_group_000", 00:14:58.805 "listen_address": { 00:14:58.805 "trtype": "TCP", 00:14:58.805 "adrfam": "IPv4", 00:14:58.805 "traddr": "10.0.0.2", 00:14:58.805 "trsvcid": "4420" 00:14:58.805 }, 00:14:58.805 "peer_address": { 00:14:58.805 "trtype": "TCP", 00:14:58.805 "adrfam": "IPv4", 00:14:58.805 
"traddr": "10.0.0.1", 00:14:58.805 "trsvcid": "56568" 00:14:58.805 }, 00:14:58.805 "auth": { 00:14:58.805 "state": "completed", 00:14:58.805 "digest": "sha384", 00:14:58.805 "dhgroup": "ffdhe2048" 00:14:58.805 } 00:14:58.805 } 00:14:58.805 ]' 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.805 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.062 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.062 17:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.062 17:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.062 17:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.062 17:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.320 17:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.253 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.511 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.768 00:15:00.768 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.769 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.769 17:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.026 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.026 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.026 17:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.026 17:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.026 17:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.026 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.026 { 00:15:01.026 "cntlid": 61, 00:15:01.026 "qid": 0, 00:15:01.026 "state": "enabled", 00:15:01.026 "thread": "nvmf_tgt_poll_group_000", 00:15:01.026 "listen_address": { 00:15:01.026 "trtype": "TCP", 00:15:01.026 "adrfam": "IPv4", 00:15:01.026 "traddr": "10.0.0.2", 00:15:01.026 "trsvcid": "4420" 00:15:01.026 }, 00:15:01.026 "peer_address": { 00:15:01.026 "trtype": "TCP", 00:15:01.026 "adrfam": "IPv4", 00:15:01.026 "traddr": "10.0.0.1", 00:15:01.026 "trsvcid": "56590" 00:15:01.026 }, 00:15:01.027 "auth": { 00:15:01.027 "state": "completed", 00:15:01.027 "digest": "sha384", 00:15:01.027 "dhgroup": "ffdhe2048" 00:15:01.027 } 00:15:01.027 } 00:15:01.027 ]' 00:15:01.027 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.027 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.027 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.342 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.342 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.342 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.342 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.342 17:37:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.600 17:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.531 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:02.788 17:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:03.044 00:15:03.044 17:37:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.044 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.044 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.301 { 00:15:03.301 "cntlid": 63, 00:15:03.301 "qid": 0, 00:15:03.301 "state": "enabled", 00:15:03.301 "thread": "nvmf_tgt_poll_group_000", 00:15:03.301 "listen_address": { 00:15:03.301 "trtype": "TCP", 00:15:03.301 "adrfam": "IPv4", 00:15:03.301 "traddr": "10.0.0.2", 00:15:03.301 "trsvcid": "4420" 00:15:03.301 }, 00:15:03.301 "peer_address": { 00:15:03.301 "trtype": "TCP", 00:15:03.301 "adrfam": "IPv4", 00:15:03.301 "traddr": "10.0.0.1", 00:15:03.301 "trsvcid": "56612" 00:15:03.301 }, 00:15:03.301 "auth": { 00:15:03.301 "state": "completed", 00:15:03.301 "digest": "sha384", 00:15:03.301 "dhgroup": "ffdhe2048" 00:15:03.301 } 00:15:03.301 } 00:15:03.301 ]' 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.301 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.559 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.559 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.559 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.815 17:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:15:04.741 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
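The trace above has just finished one full digest/dhgroup pass (sha384 with the "null" dhgroup, keys 0-3) and is about to start the next one (sha384 with ffdhe2048/ffdhe3072). The block below is a minimal sketch of the single connect/authenticate iteration this log repeats for every combination, assembled only from the rpc.py and nvme-cli invocations visible in the surrounding trace. The shell variables, the use of the default RPC socket for the target-side calls (the log routes them through the test's rpc_cmd helper), and the assumption that the named keys key0/ckey0 were registered earlier in the run are mine; the NQNs, host UUID, flags, and expected jq results are taken from the log.

# One connect_authenticate iteration, as exercised repeatedly above
# (digest, dhgroup and key index vary per loop pass).
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side (initiator RPC socket): restrict negotiation to one digest/dhgroup pair.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side: allow the host on the subsystem with the key under test
# (key0/ckey0 are assumed to have been registered earlier in the test).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, forcing DH-HMAC-CHAP with the same key.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the attach and the negotiated auth parameters on the target's qpair.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'          # expect sha384
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'         # expect ffdhe3072
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'           # expect completed

# Tear down, repeat the handshake with the kernel initiator, then clean up.
# HOST_SECRET/CTRL_SECRET stand for the full DHHC-1 strings printed in the log.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

Each pass in the log is this sequence with a different digest (sha256/sha384/sha512), dhgroup (null, ffdhe2048, ..., ffdhe8192) and key index; the jq checks are what produce the "[[ sha384 == ... ]]" style comparisons seen throughout the trace.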
00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:04.742 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:04.997 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:04.997 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.997 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.997 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.998 17:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.254 00:15:05.254 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.254 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.254 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.511 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.511 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.511 17:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.511 17:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.511 17:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.511 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.511 { 
00:15:05.511 "cntlid": 65, 00:15:05.511 "qid": 0, 00:15:05.511 "state": "enabled", 00:15:05.511 "thread": "nvmf_tgt_poll_group_000", 00:15:05.511 "listen_address": { 00:15:05.511 "trtype": "TCP", 00:15:05.511 "adrfam": "IPv4", 00:15:05.511 "traddr": "10.0.0.2", 00:15:05.511 "trsvcid": "4420" 00:15:05.511 }, 00:15:05.511 "peer_address": { 00:15:05.511 "trtype": "TCP", 00:15:05.511 "adrfam": "IPv4", 00:15:05.511 "traddr": "10.0.0.1", 00:15:05.511 "trsvcid": "56652" 00:15:05.511 }, 00:15:05.511 "auth": { 00:15:05.512 "state": "completed", 00:15:05.512 "digest": "sha384", 00:15:05.512 "dhgroup": "ffdhe3072" 00:15:05.512 } 00:15:05.512 } 00:15:05.512 ]' 00:15:05.512 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.769 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.769 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.769 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:05.769 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.769 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.769 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.769 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.027 17:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:15:06.958 17:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.958 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.958 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.958 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.958 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.958 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.958 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:06.958 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.215 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:07.215 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.215 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:07.215 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:07.215 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:07.215 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.215 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.216 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.216 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.216 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.216 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.216 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.473 00:15:07.730 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.730 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.730 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.730 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.730 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.730 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.730 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.987 { 00:15:07.987 "cntlid": 67, 00:15:07.987 "qid": 0, 00:15:07.987 "state": "enabled", 00:15:07.987 "thread": "nvmf_tgt_poll_group_000", 00:15:07.987 "listen_address": { 00:15:07.987 "trtype": "TCP", 00:15:07.987 "adrfam": "IPv4", 00:15:07.987 "traddr": "10.0.0.2", 00:15:07.987 "trsvcid": "4420" 00:15:07.987 }, 00:15:07.987 "peer_address": { 00:15:07.987 "trtype": "TCP", 00:15:07.987 "adrfam": "IPv4", 00:15:07.987 "traddr": "10.0.0.1", 00:15:07.987 "trsvcid": "35166" 00:15:07.987 }, 00:15:07.987 "auth": { 00:15:07.987 "state": "completed", 00:15:07.987 "digest": "sha384", 00:15:07.987 "dhgroup": "ffdhe3072" 00:15:07.987 } 00:15:07.987 } 00:15:07.987 ]' 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.987 17:38:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.987 17:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.259 17:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.196 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.453 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.711 00:15:09.711 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.711 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.711 17:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.968 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.968 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.968 17:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.968 17:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.968 17:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.968 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.968 { 00:15:09.968 "cntlid": 69, 00:15:09.968 "qid": 0, 00:15:09.968 "state": "enabled", 00:15:09.968 "thread": "nvmf_tgt_poll_group_000", 00:15:09.968 "listen_address": { 00:15:09.968 "trtype": "TCP", 00:15:09.968 "adrfam": "IPv4", 00:15:09.968 "traddr": "10.0.0.2", 00:15:09.968 "trsvcid": "4420" 00:15:09.968 }, 00:15:09.968 "peer_address": { 00:15:09.968 "trtype": "TCP", 00:15:09.968 "adrfam": "IPv4", 00:15:09.968 "traddr": "10.0.0.1", 00:15:09.968 "trsvcid": "35204" 00:15:09.968 }, 00:15:09.968 "auth": { 00:15:09.968 "state": "completed", 00:15:09.968 "digest": "sha384", 00:15:09.968 "dhgroup": "ffdhe3072" 00:15:09.968 } 00:15:09.968 } 00:15:09.968 ]' 00:15:09.968 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.225 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.225 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.225 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.225 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.225 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.225 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.225 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.483 17:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret 
DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.417 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:11.675 17:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:11.933 00:15:11.933 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.933 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.933 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.191 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.191 17:38:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.191 17:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.191 17:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.191 17:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.191 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.191 { 00:15:12.191 "cntlid": 71, 00:15:12.191 "qid": 0, 00:15:12.191 "state": "enabled", 00:15:12.191 "thread": "nvmf_tgt_poll_group_000", 00:15:12.191 "listen_address": { 00:15:12.191 "trtype": "TCP", 00:15:12.191 "adrfam": "IPv4", 00:15:12.191 "traddr": "10.0.0.2", 00:15:12.191 "trsvcid": "4420" 00:15:12.191 }, 00:15:12.191 "peer_address": { 00:15:12.191 "trtype": "TCP", 00:15:12.191 "adrfam": "IPv4", 00:15:12.191 "traddr": "10.0.0.1", 00:15:12.191 "trsvcid": "35240" 00:15:12.191 }, 00:15:12.191 "auth": { 00:15:12.191 "state": "completed", 00:15:12.191 "digest": "sha384", 00:15:12.191 "dhgroup": "ffdhe3072" 00:15:12.191 } 00:15:12.191 } 00:15:12.191 ]' 00:15:12.191 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.449 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.449 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.449 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.449 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.449 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.449 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.449 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.707 17:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.641 17:38:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.899 17:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.464 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.464 { 00:15:14.464 "cntlid": 73, 00:15:14.464 "qid": 0, 00:15:14.464 "state": "enabled", 00:15:14.464 "thread": "nvmf_tgt_poll_group_000", 00:15:14.464 "listen_address": { 00:15:14.464 "trtype": "TCP", 00:15:14.464 "adrfam": "IPv4", 00:15:14.464 "traddr": "10.0.0.2", 00:15:14.464 "trsvcid": "4420" 00:15:14.464 }, 00:15:14.464 "peer_address": { 00:15:14.464 "trtype": "TCP", 00:15:14.464 "adrfam": "IPv4", 00:15:14.464 "traddr": "10.0.0.1", 00:15:14.464 "trsvcid": "35272" 00:15:14.464 }, 00:15:14.464 "auth": { 00:15:14.464 
"state": "completed", 00:15:14.464 "digest": "sha384", 00:15:14.464 "dhgroup": "ffdhe4096" 00:15:14.464 } 00:15:14.464 } 00:15:14.464 ]' 00:15:14.464 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.727 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.727 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.727 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.727 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.727 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.727 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.727 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.065 17:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:15.998 17:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.256 17:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.257 17:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.257 17:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.257 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.257 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.514 00:15:16.514 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.514 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.514 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.796 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.796 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.796 17:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.797 17:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.797 17:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.797 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.797 { 00:15:16.797 "cntlid": 75, 00:15:16.797 "qid": 0, 00:15:16.797 "state": "enabled", 00:15:16.797 "thread": "nvmf_tgt_poll_group_000", 00:15:16.797 "listen_address": { 00:15:16.797 "trtype": "TCP", 00:15:16.797 "adrfam": "IPv4", 00:15:16.797 "traddr": "10.0.0.2", 00:15:16.797 "trsvcid": "4420" 00:15:16.797 }, 00:15:16.797 "peer_address": { 00:15:16.797 "trtype": "TCP", 00:15:16.797 "adrfam": "IPv4", 00:15:16.797 "traddr": "10.0.0.1", 00:15:16.797 "trsvcid": "56924" 00:15:16.797 }, 00:15:16.797 "auth": { 00:15:16.797 "state": "completed", 00:15:16.797 "digest": "sha384", 00:15:16.797 "dhgroup": "ffdhe4096" 00:15:16.797 } 00:15:16.797 } 00:15:16.797 ]' 00:15:16.797 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.797 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.797 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.055 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.055 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.055 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.055 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.055 17:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.313 17:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.247 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.504 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:18.762 00:15:18.762 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.762 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.762 17:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.019 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.019 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.019 17:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.019 17:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.019 17:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.019 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.019 { 00:15:19.019 "cntlid": 77, 00:15:19.019 "qid": 0, 00:15:19.019 "state": "enabled", 00:15:19.019 "thread": "nvmf_tgt_poll_group_000", 00:15:19.019 "listen_address": { 00:15:19.019 "trtype": "TCP", 00:15:19.019 "adrfam": "IPv4", 00:15:19.019 "traddr": "10.0.0.2", 00:15:19.019 "trsvcid": "4420" 00:15:19.019 }, 00:15:19.019 "peer_address": { 00:15:19.019 "trtype": "TCP", 00:15:19.019 "adrfam": "IPv4", 00:15:19.019 "traddr": "10.0.0.1", 00:15:19.019 "trsvcid": "56954" 00:15:19.019 }, 00:15:19.019 "auth": { 00:15:19.019 "state": "completed", 00:15:19.019 "digest": "sha384", 00:15:19.019 "dhgroup": "ffdhe4096" 00:15:19.019 } 00:15:19.019 } 00:15:19.020 ]' 00:15:19.020 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.277 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.277 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.277 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.277 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.277 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.277 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.277 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.535 17:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.468 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.726 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.727 17:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.984 00:15:20.984 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.984 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.984 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.242 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.242 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.242 17:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.242 17:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.242 17:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.242 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.242 { 00:15:21.242 "cntlid": 79, 00:15:21.242 "qid": 
0, 00:15:21.242 "state": "enabled", 00:15:21.242 "thread": "nvmf_tgt_poll_group_000", 00:15:21.242 "listen_address": { 00:15:21.242 "trtype": "TCP", 00:15:21.242 "adrfam": "IPv4", 00:15:21.242 "traddr": "10.0.0.2", 00:15:21.242 "trsvcid": "4420" 00:15:21.242 }, 00:15:21.242 "peer_address": { 00:15:21.242 "trtype": "TCP", 00:15:21.242 "adrfam": "IPv4", 00:15:21.242 "traddr": "10.0.0.1", 00:15:21.242 "trsvcid": "56972" 00:15:21.242 }, 00:15:21.242 "auth": { 00:15:21.242 "state": "completed", 00:15:21.242 "digest": "sha384", 00:15:21.242 "dhgroup": "ffdhe4096" 00:15:21.242 } 00:15:21.242 } 00:15:21.242 ]' 00:15:21.242 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.500 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.500 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.500 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.500 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.500 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.500 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.500 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.758 17:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.714 17:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:22.971 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:22.971 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.971 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.971 17:38:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:22.971 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.972 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.972 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.972 17:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.972 17:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.972 17:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.972 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.972 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.535 00:15:23.535 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.535 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.535 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.792 { 00:15:23.792 "cntlid": 81, 00:15:23.792 "qid": 0, 00:15:23.792 "state": "enabled", 00:15:23.792 "thread": "nvmf_tgt_poll_group_000", 00:15:23.792 "listen_address": { 00:15:23.792 "trtype": "TCP", 00:15:23.792 "adrfam": "IPv4", 00:15:23.792 "traddr": "10.0.0.2", 00:15:23.792 "trsvcid": "4420" 00:15:23.792 }, 00:15:23.792 "peer_address": { 00:15:23.792 "trtype": "TCP", 00:15:23.792 "adrfam": "IPv4", 00:15:23.792 "traddr": "10.0.0.1", 00:15:23.792 "trsvcid": "56984" 00:15:23.792 }, 00:15:23.792 "auth": { 00:15:23.792 "state": "completed", 00:15:23.792 "digest": "sha384", 00:15:23.792 "dhgroup": "ffdhe6144" 00:15:23.792 } 00:15:23.792 } 00:15:23.792 ]' 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.792 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.050 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.050 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.050 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.050 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.050 17:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.307 17:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:25.268 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.525 17:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.088 00:15:26.088 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.088 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.088 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.345 { 00:15:26.345 "cntlid": 83, 00:15:26.345 "qid": 0, 00:15:26.345 "state": "enabled", 00:15:26.345 "thread": "nvmf_tgt_poll_group_000", 00:15:26.345 "listen_address": { 00:15:26.345 "trtype": "TCP", 00:15:26.345 "adrfam": "IPv4", 00:15:26.345 "traddr": "10.0.0.2", 00:15:26.345 "trsvcid": "4420" 00:15:26.345 }, 00:15:26.345 "peer_address": { 00:15:26.345 "trtype": "TCP", 00:15:26.345 "adrfam": "IPv4", 00:15:26.345 "traddr": "10.0.0.1", 00:15:26.345 "trsvcid": "56998" 00:15:26.345 }, 00:15:26.345 "auth": { 00:15:26.345 "state": "completed", 00:15:26.345 "digest": "sha384", 00:15:26.345 "dhgroup": "ffdhe6144" 00:15:26.345 } 00:15:26.345 } 00:15:26.345 ]' 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.345 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.603 17:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret 
DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.976 17:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.565 00:15:28.565 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.565 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.565 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.826 { 00:15:28.826 "cntlid": 85, 00:15:28.826 "qid": 0, 00:15:28.826 "state": "enabled", 00:15:28.826 "thread": "nvmf_tgt_poll_group_000", 00:15:28.826 "listen_address": { 00:15:28.826 "trtype": "TCP", 00:15:28.826 "adrfam": "IPv4", 00:15:28.826 "traddr": "10.0.0.2", 00:15:28.826 "trsvcid": "4420" 00:15:28.826 }, 00:15:28.826 "peer_address": { 00:15:28.826 "trtype": "TCP", 00:15:28.826 "adrfam": "IPv4", 00:15:28.826 "traddr": "10.0.0.1", 00:15:28.826 "trsvcid": "34190" 00:15:28.826 }, 00:15:28.826 "auth": { 00:15:28.826 "state": "completed", 00:15:28.826 "digest": "sha384", 00:15:28.826 "dhgroup": "ffdhe6144" 00:15:28.826 } 00:15:28.826 } 00:15:28.826 ]' 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.826 17:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.125 17:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:15:30.059 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.316 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.884 00:15:30.884 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.884 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.884 17:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.142 { 00:15:31.142 "cntlid": 87, 00:15:31.142 "qid": 0, 00:15:31.142 "state": "enabled", 00:15:31.142 "thread": "nvmf_tgt_poll_group_000", 00:15:31.142 "listen_address": { 00:15:31.142 "trtype": "TCP", 00:15:31.142 "adrfam": "IPv4", 00:15:31.142 "traddr": "10.0.0.2", 00:15:31.142 "trsvcid": "4420" 00:15:31.142 }, 00:15:31.142 "peer_address": { 00:15:31.142 "trtype": "TCP", 00:15:31.142 "adrfam": "IPv4", 00:15:31.142 "traddr": "10.0.0.1", 00:15:31.142 "trsvcid": "34206" 00:15:31.142 }, 00:15:31.142 "auth": { 00:15:31.142 "state": "completed", 
00:15:31.142 "digest": "sha384", 00:15:31.142 "dhgroup": "ffdhe6144" 00:15:31.142 } 00:15:31.142 } 00:15:31.142 ]' 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.142 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.400 17:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.333 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.899 17:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.833 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.833 { 00:15:33.833 "cntlid": 89, 00:15:33.833 "qid": 0, 00:15:33.833 "state": "enabled", 00:15:33.833 "thread": "nvmf_tgt_poll_group_000", 00:15:33.833 "listen_address": { 00:15:33.833 "trtype": "TCP", 00:15:33.833 "adrfam": "IPv4", 00:15:33.833 "traddr": "10.0.0.2", 00:15:33.833 "trsvcid": "4420" 00:15:33.833 }, 00:15:33.833 "peer_address": { 00:15:33.833 "trtype": "TCP", 00:15:33.833 "adrfam": "IPv4", 00:15:33.833 "traddr": "10.0.0.1", 00:15:33.833 "trsvcid": "34238" 00:15:33.833 }, 00:15:33.833 "auth": { 00:15:33.833 "state": "completed", 00:15:33.833 "digest": "sha384", 00:15:33.833 "dhgroup": "ffdhe8192" 00:15:33.833 } 00:15:33.833 } 00:15:33.833 ]' 00:15:33.833 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.091 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.091 17:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.091 17:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:34.091 17:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.091 17:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.091 17:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.091 17:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.349 17:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:35.283 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.540 17:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
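The trace above repeats one sequence per digest/DH-group/key combination. Below is a condensed sketch of that sequence, assembled only from commands visible in the log; host-side calls go through rpc.py -s /var/tmp/host.sock, and rpc_cmd (hidden behind xtrace_disable in the trace) is assumed to be plain rpc.py against the target's default RPC socket. NQNs, the host UUID, and the key names are copied from the trace; the keyring entries key1/ckey1 are loaded earlier in the test.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host (initiator) side: restrict DH-HMAC-CHAP to one digest and DH group.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Target side: allow the host on the subsystem, with a controller key for bidirectional auth.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, authenticating with the same key pair.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Target side: confirm the qpair finished authentication with the expected parameters.
qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)
echo "$qpairs" | jq -r '.[0].auth.state'    # expect "completed"
echo "$qpairs" | jq -r '.[0].auth.digest'   # expect "sha384"
echo "$qpairs" | jq -r '.[0].auth.dhgroup'  # expect "ffdhe8192"

# Host side: detach before the next combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0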
00:15:36.473 00:15:36.473 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.473 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.473 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.730 { 00:15:36.730 "cntlid": 91, 00:15:36.730 "qid": 0, 00:15:36.730 "state": "enabled", 00:15:36.730 "thread": "nvmf_tgt_poll_group_000", 00:15:36.730 "listen_address": { 00:15:36.730 "trtype": "TCP", 00:15:36.730 "adrfam": "IPv4", 00:15:36.730 "traddr": "10.0.0.2", 00:15:36.730 "trsvcid": "4420" 00:15:36.730 }, 00:15:36.730 "peer_address": { 00:15:36.730 "trtype": "TCP", 00:15:36.730 "adrfam": "IPv4", 00:15:36.730 "traddr": "10.0.0.1", 00:15:36.730 "trsvcid": "34268" 00:15:36.730 }, 00:15:36.730 "auth": { 00:15:36.730 "state": "completed", 00:15:36.730 "digest": "sha384", 00:15:36.730 "dhgroup": "ffdhe8192" 00:15:36.730 } 00:15:36.730 } 00:15:36.730 ]' 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.730 17:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.987 17:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:15:37.920 17:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.920 17:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.920 17:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:37.920 17:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.920 17:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.920 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.920 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.920 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.178 17:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.113 00:15:39.113 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.113 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.113 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.371 { 
00:15:39.371 "cntlid": 93, 00:15:39.371 "qid": 0, 00:15:39.371 "state": "enabled", 00:15:39.371 "thread": "nvmf_tgt_poll_group_000", 00:15:39.371 "listen_address": { 00:15:39.371 "trtype": "TCP", 00:15:39.371 "adrfam": "IPv4", 00:15:39.371 "traddr": "10.0.0.2", 00:15:39.371 "trsvcid": "4420" 00:15:39.371 }, 00:15:39.371 "peer_address": { 00:15:39.371 "trtype": "TCP", 00:15:39.371 "adrfam": "IPv4", 00:15:39.371 "traddr": "10.0.0.1", 00:15:39.371 "trsvcid": "38054" 00:15:39.371 }, 00:15:39.371 "auth": { 00:15:39.371 "state": "completed", 00:15:39.371 "digest": "sha384", 00:15:39.371 "dhgroup": "ffdhe8192" 00:15:39.371 } 00:15:39.371 } 00:15:39.371 ]' 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.371 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.629 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.629 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.629 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.629 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.629 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.888 17:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.823 17:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.081 17:38:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.081 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.017 00:15:42.017 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.017 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.017 17:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.275 { 00:15:42.275 "cntlid": 95, 00:15:42.275 "qid": 0, 00:15:42.275 "state": "enabled", 00:15:42.275 "thread": "nvmf_tgt_poll_group_000", 00:15:42.275 "listen_address": { 00:15:42.275 "trtype": "TCP", 00:15:42.275 "adrfam": "IPv4", 00:15:42.275 "traddr": "10.0.0.2", 00:15:42.275 "trsvcid": "4420" 00:15:42.275 }, 00:15:42.275 "peer_address": { 00:15:42.275 "trtype": "TCP", 00:15:42.275 "adrfam": "IPv4", 00:15:42.275 "traddr": "10.0.0.1", 00:15:42.275 "trsvcid": "38084" 00:15:42.275 }, 00:15:42.275 "auth": { 00:15:42.275 "state": "completed", 00:15:42.275 "digest": "sha384", 00:15:42.275 "dhgroup": "ffdhe8192" 00:15:42.275 } 00:15:42.275 } 00:15:42.275 ]' 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.275 17:38:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.275 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.533 17:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:43.504 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.762 17:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.020 00:15:44.020 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.020 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.020 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.279 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.279 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.279 17:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.279 17:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.279 17:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.279 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.279 { 00:15:44.279 "cntlid": 97, 00:15:44.279 "qid": 0, 00:15:44.279 "state": "enabled", 00:15:44.279 "thread": "nvmf_tgt_poll_group_000", 00:15:44.279 "listen_address": { 00:15:44.279 "trtype": "TCP", 00:15:44.279 "adrfam": "IPv4", 00:15:44.279 "traddr": "10.0.0.2", 00:15:44.279 "trsvcid": "4420" 00:15:44.279 }, 00:15:44.279 "peer_address": { 00:15:44.279 "trtype": "TCP", 00:15:44.279 "adrfam": "IPv4", 00:15:44.279 "traddr": "10.0.0.1", 00:15:44.279 "trsvcid": "38102" 00:15:44.279 }, 00:15:44.279 "auth": { 00:15:44.279 "state": "completed", 00:15:44.279 "digest": "sha512", 00:15:44.279 "dhgroup": "null" 00:15:44.279 } 00:15:44.279 } 00:15:44.279 ]' 00:15:44.279 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.535 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:44.535 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.535 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:44.535 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.535 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.535 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.535 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.792 17:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret 
DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:45.728 17:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.985 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.242 00:15:46.242 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.242 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.242 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.498 17:38:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.498 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.498 17:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.498 17:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.498 17:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.498 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.498 { 00:15:46.498 "cntlid": 99, 00:15:46.498 "qid": 0, 00:15:46.498 "state": "enabled", 00:15:46.498 "thread": "nvmf_tgt_poll_group_000", 00:15:46.498 "listen_address": { 00:15:46.498 "trtype": "TCP", 00:15:46.498 "adrfam": "IPv4", 00:15:46.498 "traddr": "10.0.0.2", 00:15:46.498 "trsvcid": "4420" 00:15:46.498 }, 00:15:46.498 "peer_address": { 00:15:46.498 "trtype": "TCP", 00:15:46.498 "adrfam": "IPv4", 00:15:46.498 "traddr": "10.0.0.1", 00:15:46.498 "trsvcid": "54450" 00:15:46.498 }, 00:15:46.498 "auth": { 00:15:46.498 "state": "completed", 00:15:46.498 "digest": "sha512", 00:15:46.498 "dhgroup": "null" 00:15:46.498 } 00:15:46.498 } 00:15:46.498 ]' 00:15:46.498 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.754 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:46.754 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.754 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:46.754 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.754 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.754 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.754 17:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.010 17:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:15:47.945 17:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.945 17:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.945 17:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.945 17:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.945 17:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.945 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.945 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.945 17:38:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.212 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.213 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.779 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.779 { 00:15:48.779 "cntlid": 101, 00:15:48.779 "qid": 0, 00:15:48.779 "state": "enabled", 00:15:48.779 "thread": "nvmf_tgt_poll_group_000", 00:15:48.779 "listen_address": { 00:15:48.779 "trtype": "TCP", 00:15:48.779 "adrfam": "IPv4", 00:15:48.779 "traddr": "10.0.0.2", 00:15:48.779 "trsvcid": "4420" 00:15:48.779 }, 00:15:48.779 "peer_address": { 00:15:48.779 "trtype": "TCP", 00:15:48.779 "adrfam": "IPv4", 00:15:48.779 "traddr": "10.0.0.1", 00:15:48.779 "trsvcid": "54478" 00:15:48.779 }, 00:15:48.779 "auth": 
{ 00:15:48.779 "state": "completed", 00:15:48.779 "digest": "sha512", 00:15:48.779 "dhgroup": "null" 00:15:48.779 } 00:15:48.779 } 00:15:48.779 ]' 00:15:48.779 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.037 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.037 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.037 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.037 17:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.037 17:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.037 17:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.037 17:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.294 17:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.232 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.490 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:50.490 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.490 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.490 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:50.490 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:50.491 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.491 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:50.491 17:38:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.491 17:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.491 17:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.491 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.491 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.749 00:15:50.749 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.749 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.749 17:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.007 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.007 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.007 17:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.007 17:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.007 17:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.007 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.007 { 00:15:51.007 "cntlid": 103, 00:15:51.007 "qid": 0, 00:15:51.007 "state": "enabled", 00:15:51.007 "thread": "nvmf_tgt_poll_group_000", 00:15:51.007 "listen_address": { 00:15:51.007 "trtype": "TCP", 00:15:51.007 "adrfam": "IPv4", 00:15:51.007 "traddr": "10.0.0.2", 00:15:51.007 "trsvcid": "4420" 00:15:51.007 }, 00:15:51.007 "peer_address": { 00:15:51.007 "trtype": "TCP", 00:15:51.007 "adrfam": "IPv4", 00:15:51.007 "traddr": "10.0.0.1", 00:15:51.007 "trsvcid": "54514" 00:15:51.007 }, 00:15:51.007 "auth": { 00:15:51.007 "state": "completed", 00:15:51.007 "digest": "sha512", 00:15:51.007 "dhgroup": "null" 00:15:51.007 } 00:15:51.007 } 00:15:51.007 ]' 00:15:51.007 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.266 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.266 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.266 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:51.266 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.266 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.266 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.266 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.525 17:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:52.459 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.718 17:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.977 00:15:52.977 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.977 17:38:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.977 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.236 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.236 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.236 17:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.236 17:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.236 17:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.236 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.236 { 00:15:53.236 "cntlid": 105, 00:15:53.236 "qid": 0, 00:15:53.236 "state": "enabled", 00:15:53.236 "thread": "nvmf_tgt_poll_group_000", 00:15:53.236 "listen_address": { 00:15:53.236 "trtype": "TCP", 00:15:53.236 "adrfam": "IPv4", 00:15:53.236 "traddr": "10.0.0.2", 00:15:53.236 "trsvcid": "4420" 00:15:53.236 }, 00:15:53.236 "peer_address": { 00:15:53.236 "trtype": "TCP", 00:15:53.236 "adrfam": "IPv4", 00:15:53.236 "traddr": "10.0.0.1", 00:15:53.236 "trsvcid": "54538" 00:15:53.236 }, 00:15:53.236 "auth": { 00:15:53.236 "state": "completed", 00:15:53.236 "digest": "sha512", 00:15:53.236 "dhgroup": "ffdhe2048" 00:15:53.236 } 00:15:53.236 } 00:15:53.236 ]' 00:15:53.236 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.495 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.495 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.495 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.495 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.495 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.495 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.495 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.753 17:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
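After the SPDK-initiator pass, each iteration also exercises the in-kernel initiator via nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line, then removes the host entry so the next combination starts clean. A condensed sketch using only commands that appear in the trace; the DHHC-1 secrets are abbreviated placeholders for the values printed in the log.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Kernel initiator: host secret plus controller secret for bidirectional DH-HMAC-CHAP.
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
  --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'

# Tear down: disconnect and drop the host from the subsystem.
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN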
00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:54.687 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.945 17:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.202 00:15:55.202 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.202 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.202 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.459 { 00:15:55.459 "cntlid": 107, 00:15:55.459 "qid": 0, 00:15:55.459 "state": "enabled", 00:15:55.459 "thread": 
"nvmf_tgt_poll_group_000", 00:15:55.459 "listen_address": { 00:15:55.459 "trtype": "TCP", 00:15:55.459 "adrfam": "IPv4", 00:15:55.459 "traddr": "10.0.0.2", 00:15:55.459 "trsvcid": "4420" 00:15:55.459 }, 00:15:55.459 "peer_address": { 00:15:55.459 "trtype": "TCP", 00:15:55.459 "adrfam": "IPv4", 00:15:55.459 "traddr": "10.0.0.1", 00:15:55.459 "trsvcid": "54572" 00:15:55.459 }, 00:15:55.459 "auth": { 00:15:55.459 "state": "completed", 00:15:55.459 "digest": "sha512", 00:15:55.459 "dhgroup": "ffdhe2048" 00:15:55.459 } 00:15:55.459 } 00:15:55.459 ]' 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.459 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.717 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.717 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.717 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.717 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.717 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.974 17:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.953 17:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.211 17:38:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.211 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.469 00:15:57.469 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.469 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.469 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.727 { 00:15:57.727 "cntlid": 109, 00:15:57.727 "qid": 0, 00:15:57.727 "state": "enabled", 00:15:57.727 "thread": "nvmf_tgt_poll_group_000", 00:15:57.727 "listen_address": { 00:15:57.727 "trtype": "TCP", 00:15:57.727 "adrfam": "IPv4", 00:15:57.727 "traddr": "10.0.0.2", 00:15:57.727 "trsvcid": "4420" 00:15:57.727 }, 00:15:57.727 "peer_address": { 00:15:57.727 "trtype": "TCP", 00:15:57.727 "adrfam": "IPv4", 00:15:57.727 "traddr": "10.0.0.1", 00:15:57.727 "trsvcid": "53270" 00:15:57.727 }, 00:15:57.727 "auth": { 00:15:57.727 "state": "completed", 00:15:57.727 "digest": "sha512", 00:15:57.727 "dhgroup": "ffdhe2048" 00:15:57.727 } 00:15:57.727 } 00:15:57.727 ]' 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.727 17:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.297 17:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.230 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.487 17:38:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.745 00:15:59.745 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.745 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.745 17:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.003 { 00:16:00.003 "cntlid": 111, 00:16:00.003 "qid": 0, 00:16:00.003 "state": "enabled", 00:16:00.003 "thread": "nvmf_tgt_poll_group_000", 00:16:00.003 "listen_address": { 00:16:00.003 "trtype": "TCP", 00:16:00.003 "adrfam": "IPv4", 00:16:00.003 "traddr": "10.0.0.2", 00:16:00.003 "trsvcid": "4420" 00:16:00.003 }, 00:16:00.003 "peer_address": { 00:16:00.003 "trtype": "TCP", 00:16:00.003 "adrfam": "IPv4", 00:16:00.003 "traddr": "10.0.0.1", 00:16:00.003 "trsvcid": "53302" 00:16:00.003 }, 00:16:00.003 "auth": { 00:16:00.003 "state": "completed", 00:16:00.003 "digest": "sha512", 00:16:00.003 "dhgroup": "ffdhe2048" 00:16:00.003 } 00:16:00.003 } 00:16:00.003 ]' 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.003 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.261 17:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.191 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.756 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.014 00:16:02.014 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.014 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.014 17:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.271 { 00:16:02.271 "cntlid": 113, 00:16:02.271 "qid": 0, 00:16:02.271 "state": "enabled", 00:16:02.271 "thread": "nvmf_tgt_poll_group_000", 00:16:02.271 "listen_address": { 00:16:02.271 "trtype": "TCP", 00:16:02.271 "adrfam": "IPv4", 00:16:02.271 "traddr": "10.0.0.2", 00:16:02.271 "trsvcid": "4420" 00:16:02.271 }, 00:16:02.271 "peer_address": { 00:16:02.271 "trtype": "TCP", 00:16:02.271 "adrfam": "IPv4", 00:16:02.271 "traddr": "10.0.0.1", 00:16:02.271 "trsvcid": "53328" 00:16:02.271 }, 00:16:02.271 "auth": { 00:16:02.271 "state": "completed", 00:16:02.271 "digest": "sha512", 00:16:02.271 "dhgroup": "ffdhe3072" 00:16:02.271 } 00:16:02.271 } 00:16:02.271 ]' 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.271 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.529 17:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.460 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.718 17:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.284 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.284 { 00:16:04.284 "cntlid": 115, 00:16:04.284 "qid": 0, 00:16:04.284 "state": "enabled", 00:16:04.284 "thread": "nvmf_tgt_poll_group_000", 00:16:04.284 "listen_address": { 00:16:04.284 "trtype": "TCP", 00:16:04.284 "adrfam": "IPv4", 00:16:04.284 "traddr": "10.0.0.2", 00:16:04.284 "trsvcid": "4420" 00:16:04.284 }, 00:16:04.284 "peer_address": { 00:16:04.284 "trtype": "TCP", 00:16:04.284 "adrfam": "IPv4", 00:16:04.284 "traddr": "10.0.0.1", 00:16:04.284 "trsvcid": "53358" 00:16:04.284 }, 00:16:04.284 "auth": { 00:16:04.284 "state": "completed", 00:16:04.284 "digest": "sha512", 00:16:04.284 "dhgroup": "ffdhe3072" 00:16:04.284 } 00:16:04.284 } 
00:16:04.284 ]' 00:16:04.284 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.543 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.543 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.543 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.543 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.543 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.543 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.543 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.800 17:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.733 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.991 17:39:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.991 17:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.249 00:16:06.249 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.249 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.249 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.507 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.507 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.507 17:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.507 17:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.507 17:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.507 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.507 { 00:16:06.507 "cntlid": 117, 00:16:06.507 "qid": 0, 00:16:06.507 "state": "enabled", 00:16:06.507 "thread": "nvmf_tgt_poll_group_000", 00:16:06.507 "listen_address": { 00:16:06.507 "trtype": "TCP", 00:16:06.507 "adrfam": "IPv4", 00:16:06.507 "traddr": "10.0.0.2", 00:16:06.507 "trsvcid": "4420" 00:16:06.507 }, 00:16:06.507 "peer_address": { 00:16:06.507 "trtype": "TCP", 00:16:06.507 "adrfam": "IPv4", 00:16:06.507 "traddr": "10.0.0.1", 00:16:06.507 "trsvcid": "43216" 00:16:06.507 }, 00:16:06.507 "auth": { 00:16:06.507 "state": "completed", 00:16:06.507 "digest": "sha512", 00:16:06.507 "dhgroup": "ffdhe3072" 00:16:06.507 } 00:16:06.507 } 00:16:06.507 ]' 00:16:06.507 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.764 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.764 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.764 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.764 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.764 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.764 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.764 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.022 17:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:07.956 17:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.216 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.474 00:16:08.474 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.474 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.474 17:39:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.732 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.732 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.732 17:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.732 17:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.732 17:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.732 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.732 { 00:16:08.732 "cntlid": 119, 00:16:08.732 "qid": 0, 00:16:08.732 "state": "enabled", 00:16:08.732 "thread": "nvmf_tgt_poll_group_000", 00:16:08.732 "listen_address": { 00:16:08.732 "trtype": "TCP", 00:16:08.732 "adrfam": "IPv4", 00:16:08.732 "traddr": "10.0.0.2", 00:16:08.732 "trsvcid": "4420" 00:16:08.732 }, 00:16:08.732 "peer_address": { 00:16:08.732 "trtype": "TCP", 00:16:08.732 "adrfam": "IPv4", 00:16:08.732 "traddr": "10.0.0.1", 00:16:08.732 "trsvcid": "43242" 00:16:08.732 }, 00:16:08.732 "auth": { 00:16:08.732 "state": "completed", 00:16:08.732 "digest": "sha512", 00:16:08.732 "dhgroup": "ffdhe3072" 00:16:08.732 } 00:16:08.732 } 00:16:08.732 ]' 00:16:08.732 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.990 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.990 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.990 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.990 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.990 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.990 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.990 17:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.247 17:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.201 17:39:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.201 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.500 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.758 00:16:10.758 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.758 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.758 17:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.015 { 00:16:11.015 "cntlid": 121, 00:16:11.015 "qid": 0, 00:16:11.015 "state": "enabled", 00:16:11.015 "thread": "nvmf_tgt_poll_group_000", 00:16:11.015 "listen_address": { 00:16:11.015 "trtype": "TCP", 00:16:11.015 "adrfam": "IPv4", 
00:16:11.015 "traddr": "10.0.0.2", 00:16:11.015 "trsvcid": "4420" 00:16:11.015 }, 00:16:11.015 "peer_address": { 00:16:11.015 "trtype": "TCP", 00:16:11.015 "adrfam": "IPv4", 00:16:11.015 "traddr": "10.0.0.1", 00:16:11.015 "trsvcid": "43260" 00:16:11.015 }, 00:16:11.015 "auth": { 00:16:11.015 "state": "completed", 00:16:11.015 "digest": "sha512", 00:16:11.015 "dhgroup": "ffdhe4096" 00:16:11.015 } 00:16:11.015 } 00:16:11.015 ]' 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.015 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.273 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.273 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.273 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.273 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.273 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.530 17:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.467 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:12.723 17:39:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.723 17:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.980 00:16:12.980 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.980 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.980 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.238 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.238 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.238 17:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.238 17:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.238 17:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.238 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.238 { 00:16:13.238 "cntlid": 123, 00:16:13.238 "qid": 0, 00:16:13.238 "state": "enabled", 00:16:13.238 "thread": "nvmf_tgt_poll_group_000", 00:16:13.238 "listen_address": { 00:16:13.238 "trtype": "TCP", 00:16:13.238 "adrfam": "IPv4", 00:16:13.238 "traddr": "10.0.0.2", 00:16:13.238 "trsvcid": "4420" 00:16:13.238 }, 00:16:13.238 "peer_address": { 00:16:13.238 "trtype": "TCP", 00:16:13.238 "adrfam": "IPv4", 00:16:13.238 "traddr": "10.0.0.1", 00:16:13.238 "trsvcid": "43268" 00:16:13.238 }, 00:16:13.238 "auth": { 00:16:13.238 "state": "completed", 00:16:13.238 "digest": "sha512", 00:16:13.238 "dhgroup": "ffdhe4096" 00:16:13.238 } 00:16:13.238 } 00:16:13.238 ]' 00:16:13.238 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.495 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.495 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.495 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.495 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.495 17:39:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.495 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.495 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.752 17:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:16:14.686 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.686 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.686 17:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.686 17:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.686 17:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.687 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.687 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.687 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.944 17:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.201 00:16:15.202 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.202 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.202 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.459 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.459 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.459 17:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.459 17:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.459 17:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.459 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.459 { 00:16:15.459 "cntlid": 125, 00:16:15.459 "qid": 0, 00:16:15.459 "state": "enabled", 00:16:15.459 "thread": "nvmf_tgt_poll_group_000", 00:16:15.459 "listen_address": { 00:16:15.459 "trtype": "TCP", 00:16:15.459 "adrfam": "IPv4", 00:16:15.459 "traddr": "10.0.0.2", 00:16:15.459 "trsvcid": "4420" 00:16:15.459 }, 00:16:15.459 "peer_address": { 00:16:15.459 "trtype": "TCP", 00:16:15.459 "adrfam": "IPv4", 00:16:15.459 "traddr": "10.0.0.1", 00:16:15.459 "trsvcid": "43292" 00:16:15.459 }, 00:16:15.459 "auth": { 00:16:15.459 "state": "completed", 00:16:15.459 "digest": "sha512", 00:16:15.459 "dhgroup": "ffdhe4096" 00:16:15.459 } 00:16:15.459 } 00:16:15.459 ]' 00:16:15.459 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.716 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.716 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.716 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.716 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.716 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.716 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.716 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.973 17:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.908 17:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.187 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.444 00:16:17.444 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.444 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.444 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.702 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.702 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.702 17:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.702 17:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:17.702 17:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.702 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.702 { 00:16:17.702 "cntlid": 127, 00:16:17.702 "qid": 0, 00:16:17.702 "state": "enabled", 00:16:17.702 "thread": "nvmf_tgt_poll_group_000", 00:16:17.702 "listen_address": { 00:16:17.702 "trtype": "TCP", 00:16:17.702 "adrfam": "IPv4", 00:16:17.702 "traddr": "10.0.0.2", 00:16:17.702 "trsvcid": "4420" 00:16:17.702 }, 00:16:17.702 "peer_address": { 00:16:17.702 "trtype": "TCP", 00:16:17.702 "adrfam": "IPv4", 00:16:17.702 "traddr": "10.0.0.1", 00:16:17.702 "trsvcid": "60996" 00:16:17.702 }, 00:16:17.702 "auth": { 00:16:17.702 "state": "completed", 00:16:17.702 "digest": "sha512", 00:16:17.702 "dhgroup": "ffdhe4096" 00:16:17.702 } 00:16:17.702 } 00:16:17.702 ]' 00:16:17.702 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.959 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.959 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.959 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.959 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.959 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.959 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.960 17:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.216 17:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:19.150 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.407 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.972 00:16:19.972 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.972 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.972 17:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.230 { 00:16:20.230 "cntlid": 129, 00:16:20.230 "qid": 0, 00:16:20.230 "state": "enabled", 00:16:20.230 "thread": "nvmf_tgt_poll_group_000", 00:16:20.230 "listen_address": { 00:16:20.230 "trtype": "TCP", 00:16:20.230 "adrfam": "IPv4", 00:16:20.230 "traddr": "10.0.0.2", 00:16:20.230 "trsvcid": "4420" 00:16:20.230 }, 00:16:20.230 "peer_address": { 00:16:20.230 "trtype": "TCP", 00:16:20.230 "adrfam": "IPv4", 00:16:20.230 "traddr": "10.0.0.1", 00:16:20.230 "trsvcid": "32784" 00:16:20.230 }, 00:16:20.230 "auth": { 00:16:20.230 "state": "completed", 00:16:20.230 "digest": "sha512", 00:16:20.230 "dhgroup": "ffdhe6144" 00:16:20.230 } 00:16:20.230 } 00:16:20.230 ]' 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.230 17:39:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.230 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.489 17:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:21.423 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.681 17:39:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.681 17:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.246 00:16:22.246 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.246 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.246 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.503 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.503 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.503 17:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.503 17:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.503 17:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.503 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.503 { 00:16:22.503 "cntlid": 131, 00:16:22.503 "qid": 0, 00:16:22.503 "state": "enabled", 00:16:22.503 "thread": "nvmf_tgt_poll_group_000", 00:16:22.503 "listen_address": { 00:16:22.503 "trtype": "TCP", 00:16:22.503 "adrfam": "IPv4", 00:16:22.503 "traddr": "10.0.0.2", 00:16:22.503 "trsvcid": "4420" 00:16:22.503 }, 00:16:22.503 "peer_address": { 00:16:22.503 "trtype": "TCP", 00:16:22.503 "adrfam": "IPv4", 00:16:22.503 "traddr": "10.0.0.1", 00:16:22.503 "trsvcid": "32806" 00:16:22.503 }, 00:16:22.503 "auth": { 00:16:22.503 "state": "completed", 00:16:22.503 "digest": "sha512", 00:16:22.503 "dhgroup": "ffdhe6144" 00:16:22.503 } 00:16:22.503 } 00:16:22.503 ]' 00:16:22.503 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.761 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.761 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.761 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.761 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.761 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.761 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.761 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.018 17:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:23.992 17:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.250 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.817 00:16:24.817 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.817 17:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.817 17:39:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.075 { 00:16:25.075 "cntlid": 133, 00:16:25.075 "qid": 0, 00:16:25.075 "state": "enabled", 00:16:25.075 "thread": "nvmf_tgt_poll_group_000", 00:16:25.075 "listen_address": { 00:16:25.075 "trtype": "TCP", 00:16:25.075 "adrfam": "IPv4", 00:16:25.075 "traddr": "10.0.0.2", 00:16:25.075 "trsvcid": "4420" 00:16:25.075 }, 00:16:25.075 "peer_address": { 00:16:25.075 "trtype": "TCP", 00:16:25.075 "adrfam": "IPv4", 00:16:25.075 "traddr": "10.0.0.1", 00:16:25.075 "trsvcid": "32834" 00:16:25.075 }, 00:16:25.075 "auth": { 00:16:25.075 "state": "completed", 00:16:25.075 "digest": "sha512", 00:16:25.075 "dhgroup": "ffdhe6144" 00:16:25.075 } 00:16:25.075 } 00:16:25.075 ]' 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.075 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.334 17:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:16:26.269 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.269 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.269 17:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.269 17:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.527 17:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.527 17:39:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.527 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.527 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.786 17:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.351 00:16:27.351 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.351 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.351 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.351 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.351 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.351 17:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.351 17:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.608 { 00:16:27.608 "cntlid": 135, 00:16:27.608 "qid": 0, 00:16:27.608 "state": "enabled", 00:16:27.608 "thread": "nvmf_tgt_poll_group_000", 00:16:27.608 "listen_address": { 00:16:27.608 "trtype": "TCP", 00:16:27.608 "adrfam": "IPv4", 00:16:27.608 "traddr": "10.0.0.2", 00:16:27.608 "trsvcid": "4420" 00:16:27.608 }, 
00:16:27.608 "peer_address": { 00:16:27.608 "trtype": "TCP", 00:16:27.608 "adrfam": "IPv4", 00:16:27.608 "traddr": "10.0.0.1", 00:16:27.608 "trsvcid": "53014" 00:16:27.608 }, 00:16:27.608 "auth": { 00:16:27.608 "state": "completed", 00:16:27.608 "digest": "sha512", 00:16:27.608 "dhgroup": "ffdhe6144" 00:16:27.608 } 00:16:27.608 } 00:16:27.608 ]' 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.608 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.864 17:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:28.798 17:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.056 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.989 00:16:29.989 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.989 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.989 17:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.247 { 00:16:30.247 "cntlid": 137, 00:16:30.247 "qid": 0, 00:16:30.247 "state": "enabled", 00:16:30.247 "thread": "nvmf_tgt_poll_group_000", 00:16:30.247 "listen_address": { 00:16:30.247 "trtype": "TCP", 00:16:30.247 "adrfam": "IPv4", 00:16:30.247 "traddr": "10.0.0.2", 00:16:30.247 "trsvcid": "4420" 00:16:30.247 }, 00:16:30.247 "peer_address": { 00:16:30.247 "trtype": "TCP", 00:16:30.247 "adrfam": "IPv4", 00:16:30.247 "traddr": "10.0.0.1", 00:16:30.247 "trsvcid": "53044" 00:16:30.247 }, 00:16:30.247 "auth": { 00:16:30.247 "state": "completed", 00:16:30.247 "digest": "sha512", 00:16:30.247 "dhgroup": "ffdhe8192" 00:16:30.247 } 00:16:30.247 } 00:16:30.247 ]' 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.247 17:39:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.247 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.506 17:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:31.440 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.698 17:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.631 00:16:32.631 17:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.631 17:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.631 17:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.889 17:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.889 17:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.889 17:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.889 17:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.889 17:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.889 17:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.889 { 00:16:32.889 "cntlid": 139, 00:16:32.889 "qid": 0, 00:16:32.889 "state": "enabled", 00:16:32.889 "thread": "nvmf_tgt_poll_group_000", 00:16:32.889 "listen_address": { 00:16:32.889 "trtype": "TCP", 00:16:32.889 "adrfam": "IPv4", 00:16:32.889 "traddr": "10.0.0.2", 00:16:32.889 "trsvcid": "4420" 00:16:32.889 }, 00:16:32.889 "peer_address": { 00:16:32.889 "trtype": "TCP", 00:16:32.889 "adrfam": "IPv4", 00:16:32.889 "traddr": "10.0.0.1", 00:16:32.889 "trsvcid": "53070" 00:16:32.889 }, 00:16:32.889 "auth": { 00:16:32.889 "state": "completed", 00:16:32.889 "digest": "sha512", 00:16:32.889 "dhgroup": "ffdhe8192" 00:16:32.889 } 00:16:32.889 } 00:16:32.889 ]' 00:16:32.890 17:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.890 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.890 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.147 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.147 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.147 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.147 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.147 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.407 17:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OGMwZTNlNzBhODIzMTllMjYzN2I1OWMyMzYyMjFjYTRjVQa3: --dhchap-ctrl-secret DHHC-1:02:NzdjYjc5MjBjMWQ4MDllMWUxYTJlZDY1ZDBhODhjMTRmNjBmODFjNzdiZGQwZDhi4J+fUQ==: 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.341 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.599 17:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.535 00:16:35.535 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.535 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.535 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.793 { 00:16:35.793 "cntlid": 141, 00:16:35.793 "qid": 0, 00:16:35.793 "state": "enabled", 00:16:35.793 "thread": "nvmf_tgt_poll_group_000", 00:16:35.793 "listen_address": { 00:16:35.793 "trtype": "TCP", 00:16:35.793 "adrfam": "IPv4", 00:16:35.793 "traddr": "10.0.0.2", 00:16:35.793 "trsvcid": "4420" 00:16:35.793 }, 00:16:35.793 "peer_address": { 00:16:35.793 "trtype": "TCP", 00:16:35.793 "adrfam": "IPv4", 00:16:35.793 "traddr": "10.0.0.1", 00:16:35.793 "trsvcid": "53092" 00:16:35.793 }, 00:16:35.793 "auth": { 00:16:35.793 "state": "completed", 00:16:35.793 "digest": "sha512", 00:16:35.793 "dhgroup": "ffdhe8192" 00:16:35.793 } 00:16:35.793 } 00:16:35.793 ]' 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.793 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.050 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.050 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.050 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.050 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.050 17:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.319 17:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjE5MWQyODcwODlkY2Y2ZmRkNmE2MGUzMzY5OGRmOGNmYmFmZDBmYWY3YzExYWE00sot2w==: --dhchap-ctrl-secret DHHC-1:01:Y2NlYThjNGNjYmU5YmQ2NDc5ZmJiN2U0YjFiZDg3NDRdPdyM: 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:37.254 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.512 17:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.478 00:16:38.478 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.478 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.478 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.735 { 00:16:38.735 "cntlid": 143, 00:16:38.735 "qid": 0, 00:16:38.735 "state": "enabled", 00:16:38.735 "thread": "nvmf_tgt_poll_group_000", 00:16:38.735 "listen_address": { 00:16:38.735 "trtype": "TCP", 00:16:38.735 "adrfam": "IPv4", 00:16:38.735 "traddr": "10.0.0.2", 00:16:38.735 "trsvcid": "4420" 00:16:38.735 }, 00:16:38.735 "peer_address": { 00:16:38.735 "trtype": "TCP", 00:16:38.735 "adrfam": "IPv4", 00:16:38.735 "traddr": "10.0.0.1", 00:16:38.735 "trsvcid": "50830" 00:16:38.735 }, 00:16:38.735 "auth": { 00:16:38.735 "state": "completed", 00:16:38.735 "digest": "sha512", 00:16:38.735 "dhgroup": "ffdhe8192" 00:16:38.735 } 00:16:38.735 } 00:16:38.735 ]' 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.735 
17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.735 17:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.993 17:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.925 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.545 17:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.110 00:16:41.110 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.110 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.110 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.367 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.367 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.367 17:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.367 17:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 17:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.367 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.367 { 00:16:41.367 "cntlid": 145, 00:16:41.367 "qid": 0, 00:16:41.367 "state": "enabled", 00:16:41.367 "thread": "nvmf_tgt_poll_group_000", 00:16:41.367 "listen_address": { 00:16:41.367 "trtype": "TCP", 00:16:41.367 "adrfam": "IPv4", 00:16:41.367 "traddr": "10.0.0.2", 00:16:41.367 "trsvcid": "4420" 00:16:41.367 }, 00:16:41.367 "peer_address": { 00:16:41.367 "trtype": "TCP", 00:16:41.367 "adrfam": "IPv4", 00:16:41.367 "traddr": "10.0.0.1", 00:16:41.367 "trsvcid": "50842" 00:16:41.367 }, 00:16:41.367 "auth": { 00:16:41.367 "state": "completed", 00:16:41.367 "digest": "sha512", 00:16:41.367 "dhgroup": "ffdhe8192" 00:16:41.367 } 00:16:41.367 } 00:16:41.367 ]' 00:16:41.367 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.623 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.623 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.623 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.623 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.623 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.623 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.623 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.882 17:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWQzMjJkNTllZDEyMDNiNzUyNmYyYjU1MmUxNTM1M2FiMTc2MjE0NjdkNzQ1NzZjcMaFuA==: --dhchap-ctrl-secret DHHC-1:03:MGFmMGRkYmQyMzBmMTMwYmU1NWFmM2Y2NmVkNWI0MGRmMTM5MmE1ZmQzMzY0Y2MzOGIzNmI3YzBiOWJkNDcwNz84MPg=: 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:42.817 17:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:16:43.753 request: 00:16:43.753 { 00:16:43.753 "name": "nvme0", 00:16:43.753 "trtype": "tcp", 00:16:43.753 "traddr": "10.0.0.2", 00:16:43.753 "adrfam": "ipv4", 00:16:43.753 "trsvcid": "4420", 00:16:43.753 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:43.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:43.753 "prchk_reftag": false, 00:16:43.753 "prchk_guard": false, 00:16:43.753 "hdgst": false, 00:16:43.753 "ddgst": false, 00:16:43.753 "dhchap_key": "key2", 00:16:43.753 "method": "bdev_nvme_attach_controller", 00:16:43.753 "req_id": 1 00:16:43.753 } 00:16:43.753 Got JSON-RPC error response 00:16:43.753 response: 00:16:43.753 { 00:16:43.753 "code": -5, 00:16:43.753 "message": "Input/output error" 00:16:43.753 } 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:43.753 17:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:44.683 request: 00:16:44.683 { 00:16:44.683 "name": "nvme0", 00:16:44.683 "trtype": "tcp", 00:16:44.683 "traddr": "10.0.0.2", 00:16:44.683 "adrfam": "ipv4", 00:16:44.683 "trsvcid": "4420", 00:16:44.683 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:44.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:44.683 "prchk_reftag": false, 00:16:44.683 "prchk_guard": false, 00:16:44.683 "hdgst": false, 00:16:44.683 "ddgst": false, 00:16:44.683 "dhchap_key": "key1", 00:16:44.683 "dhchap_ctrlr_key": "ckey2", 00:16:44.683 "method": "bdev_nvme_attach_controller", 00:16:44.683 "req_id": 1 00:16:44.683 } 00:16:44.683 Got JSON-RPC error response 00:16:44.683 response: 00:16:44.683 { 00:16:44.683 "code": -5, 00:16:44.683 "message": "Input/output error" 00:16:44.683 } 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.683 17:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.684 17:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.249 request: 00:16:45.249 { 00:16:45.249 "name": "nvme0", 00:16:45.249 "trtype": "tcp", 00:16:45.249 "traddr": "10.0.0.2", 00:16:45.249 "adrfam": "ipv4", 00:16:45.249 "trsvcid": "4420", 00:16:45.249 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:45.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:45.249 "prchk_reftag": false, 00:16:45.249 "prchk_guard": false, 00:16:45.249 "hdgst": false, 00:16:45.249 "ddgst": false, 00:16:45.249 "dhchap_key": "key1", 00:16:45.249 "dhchap_ctrlr_key": "ckey1", 00:16:45.249 "method": "bdev_nvme_attach_controller", 00:16:45.249 "req_id": 1 00:16:45.249 } 00:16:45.249 Got JSON-RPC error response 00:16:45.249 response: 00:16:45.249 { 00:16:45.249 "code": -5, 00:16:45.249 "message": "Input/output error" 00:16:45.249 } 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2223056 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2223056 ']' 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2223056 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:45.249 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2223056 00:16:45.507 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:45.507 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:16:45.507 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2223056' 00:16:45.507 killing process with pid 2223056 00:16:45.507 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2223056 00:16:45.507 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2223056 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2245728 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2245728 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2245728 ']' 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.764 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2245728 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2245728 ']' 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
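For reference, the failing attach attempts traced above reduce to the short host/target RPC exchange sketched below. This is a condensed illustration assembled from the commands visible in the trace (NQNs, addresses, socket paths and key names are the ones from this run), not the target/auth.sh script itself; the target-side call is issued through the test's rpc_cmd helper against the target's /var/tmp/spdk.sock.

# Target: authorize the host with key1 only.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
# Host (bdev_nvme initiator listening on /var/tmp/host.sock): try to attach with key2 instead.
# DH-HMAC-CHAP cannot complete with mismatched keys, so the RPC returns code -5
# "Input/output error" -- exactly the failure the NOT wrapper in the trace asserts on.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2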
00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.028 17:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.285 17:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.216 00:16:47.216 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.216 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.216 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.472 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.472 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.472 17:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.472 17:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.472 17:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.472 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.472 { 00:16:47.472 
"cntlid": 1, 00:16:47.472 "qid": 0, 00:16:47.472 "state": "enabled", 00:16:47.472 "thread": "nvmf_tgt_poll_group_000", 00:16:47.472 "listen_address": { 00:16:47.472 "trtype": "TCP", 00:16:47.473 "adrfam": "IPv4", 00:16:47.473 "traddr": "10.0.0.2", 00:16:47.473 "trsvcid": "4420" 00:16:47.473 }, 00:16:47.473 "peer_address": { 00:16:47.473 "trtype": "TCP", 00:16:47.473 "adrfam": "IPv4", 00:16:47.473 "traddr": "10.0.0.1", 00:16:47.473 "trsvcid": "47986" 00:16:47.473 }, 00:16:47.473 "auth": { 00:16:47.473 "state": "completed", 00:16:47.473 "digest": "sha512", 00:16:47.473 "dhgroup": "ffdhe8192" 00:16:47.473 } 00:16:47.473 } 00:16:47.473 ]' 00:16:47.473 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.473 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.473 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.473 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.473 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.729 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.729 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.729 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.986 17:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODEzODIyZmI3MDVlODZkMzFjN2RjMDhjYzBlMmYxMDhjYTQ5YWFhZDc1MGUwZDZiYmQxYTcwMTEyNDljMDgzNYpASIo=: 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:48.916 17:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:49.173 17:39:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.173 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:49.173 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.173 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:49.173 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.174 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:49.174 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.174 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.174 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.430 request: 00:16:49.430 { 00:16:49.430 "name": "nvme0", 00:16:49.430 "trtype": "tcp", 00:16:49.430 "traddr": "10.0.0.2", 00:16:49.430 "adrfam": "ipv4", 00:16:49.430 "trsvcid": "4420", 00:16:49.430 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.431 "prchk_reftag": false, 00:16:49.431 "prchk_guard": false, 00:16:49.431 "hdgst": false, 00:16:49.431 "ddgst": false, 00:16:49.431 "dhchap_key": "key3", 00:16:49.431 "method": "bdev_nvme_attach_controller", 00:16:49.431 "req_id": 1 00:16:49.431 } 00:16:49.431 Got JSON-RPC error response 00:16:49.431 response: 00:16:49.431 { 00:16:49.431 "code": -5, 00:16:49.431 "message": "Input/output error" 00:16:49.431 } 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:49.431 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.688 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.946 request: 00:16:49.946 { 00:16:49.946 "name": "nvme0", 00:16:49.946 "trtype": "tcp", 00:16:49.946 "traddr": "10.0.0.2", 00:16:49.946 "adrfam": "ipv4", 00:16:49.946 "trsvcid": "4420", 00:16:49.946 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.946 "prchk_reftag": false, 00:16:49.946 "prchk_guard": false, 00:16:49.946 "hdgst": false, 00:16:49.946 "ddgst": false, 00:16:49.946 "dhchap_key": "key3", 00:16:49.946 "method": "bdev_nvme_attach_controller", 00:16:49.946 "req_id": 1 00:16:49.946 } 00:16:49.946 Got JSON-RPC error response 00:16:49.946 response: 00:16:49.946 { 00:16:49.946 "code": -5, 00:16:49.946 "message": "Input/output error" 00:16:49.946 } 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.946 17:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.204 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.462 request: 00:16:50.462 { 00:16:50.462 "name": "nvme0", 00:16:50.462 "trtype": "tcp", 00:16:50.462 "traddr": "10.0.0.2", 00:16:50.462 "adrfam": "ipv4", 00:16:50.462 "trsvcid": "4420", 00:16:50.462 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:50.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.462 "prchk_reftag": false, 00:16:50.462 "prchk_guard": false, 00:16:50.462 "hdgst": false, 00:16:50.462 "ddgst": false, 00:16:50.462 
"dhchap_key": "key0", 00:16:50.462 "dhchap_ctrlr_key": "key1", 00:16:50.462 "method": "bdev_nvme_attach_controller", 00:16:50.462 "req_id": 1 00:16:50.462 } 00:16:50.462 Got JSON-RPC error response 00:16:50.462 response: 00:16:50.462 { 00:16:50.462 "code": -5, 00:16:50.462 "message": "Input/output error" 00:16:50.462 } 00:16:50.462 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:50.462 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.462 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.462 17:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.462 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:50.462 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:50.719 00:16:50.719 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:50.719 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:50.719 17:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.976 17:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.977 17:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.977 17:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2223080 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2223080 ']' 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2223080 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2223080 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2223080' 00:16:51.236 killing process with pid 2223080 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2223080 00:16:51.236 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2223080 
00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.805 rmmod nvme_tcp 00:16:51.805 rmmod nvme_fabrics 00:16:51.805 rmmod nvme_keyring 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2245728 ']' 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2245728 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2245728 ']' 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2245728 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.805 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2245728 00:16:51.806 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.806 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.806 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2245728' 00:16:51.806 killing process with pid 2245728 00:16:51.806 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2245728 00:16:51.806 17:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2245728 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.105 17:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.009 17:39:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.009 17:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DmJ /tmp/spdk.key-sha256.SmZ /tmp/spdk.key-sha384.rMB /tmp/spdk.key-sha512.AMU /tmp/spdk.key-sha512.xFz /tmp/spdk.key-sha384.t1v /tmp/spdk.key-sha256.c56 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:54.009 00:16:54.009 real 3m9.448s 00:16:54.009 user 7m20.960s 00:16:54.009 sys 0m24.984s 00:16:54.009 17:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.009 17:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.009 ************************************ 00:16:54.009 END TEST nvmf_auth_target 00:16:54.009 ************************************ 00:16:54.009 17:39:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:54.009 17:39:49 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:54.009 17:39:49 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:54.009 17:39:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:54.009 17:39:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.009 17:39:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.267 ************************************ 00:16:54.267 START TEST nvmf_bdevio_no_huge 00:16:54.267 ************************************ 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:54.267 * Looking for test storage... 00:16:54.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.267 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.268 17:39:49 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.268 17:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:56.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:56.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.167 
17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.167 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:56.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:56.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.168 17:39:51 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.168 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:56.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:16:56.425 00:16:56.425 --- 10.0.0.2 ping statistics --- 00:16:56.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.425 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:16:56.425 00:16:56.425 --- 10.0.0.1 ping statistics --- 00:16:56.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.425 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2248493 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2248493 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2248493 ']' 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.425 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.425 [2024-07-15 17:39:51.478398] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:16:56.425 [2024-07-15 17:39:51.478472] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:56.425 [2024-07-15 17:39:51.547848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.683 [2024-07-15 17:39:51.660246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.683 [2024-07-15 17:39:51.660304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.683 [2024-07-15 17:39:51.660325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.683 [2024-07-15 17:39:51.660343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.683 [2024-07-15 17:39:51.660358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
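Condensed, the nvmf_tcp_init sequence traced above amounts to the following shell steps; the interface names, addresses and the cvl_0_0_ns_spdk namespace are the ones from this run, and the nvmf_tgt path is shortened to be relative to the SPDK tree:

    # target port moves into its own namespace, initiator port stays on the host
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target then runs inside the namespace; --no-huge -s 1024 is what this test passes
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78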
00:16:56.683 [2024-07-15 17:39:51.660511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:56.683 [2024-07-15 17:39:51.660576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:56.683 [2024-07-15 17:39:51.660646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:56.683 [2024-07-15 17:39:51.660654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.683 [2024-07-15 17:39:51.786219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.683 Malloc0 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.683 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 [2024-07-15 17:39:51.824576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.978 { 00:16:56.978 "params": { 00:16:56.978 "name": "Nvme$subsystem", 00:16:56.978 "trtype": "$TEST_TRANSPORT", 00:16:56.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.978 "adrfam": "ipv4", 00:16:56.978 "trsvcid": "$NVMF_PORT", 00:16:56.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.978 "hdgst": ${hdgst:-false}, 00:16:56.978 "ddgst": ${ddgst:-false} 00:16:56.978 }, 00:16:56.978 "method": "bdev_nvme_attach_controller" 00:16:56.978 } 00:16:56.978 EOF 00:16:56.978 )") 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:56.978 17:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:56.978 "params": { 00:16:56.978 "name": "Nvme1", 00:16:56.978 "trtype": "tcp", 00:16:56.978 "traddr": "10.0.0.2", 00:16:56.978 "adrfam": "ipv4", 00:16:56.978 "trsvcid": "4420", 00:16:56.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.978 "hdgst": false, 00:16:56.978 "ddgst": false 00:16:56.978 }, 00:16:56.978 "method": "bdev_nvme_attach_controller" 00:16:56.978 }' 00:16:56.978 [2024-07-15 17:39:51.872639] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
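The bdevio target provisioning that precedes the EAL banner above is a short RPC sequence, and the initiator configuration is generated JSON handed to bdevio on file descriptor 62. A rough standalone equivalent (assuming rpc.py is on PATH; NQNs, serial number and listener address are the ones used here):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio is then launched with the generated bdev_nvme_attach_controller
    # config on fd 62, exactly as traced above:
    #   bdevio --json /dev/fd/62 --no-huge -s 1024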
00:16:56.978 [2024-07-15 17:39:51.872710] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2248524 ] 00:16:56.978 [2024-07-15 17:39:51.935345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.978 [2024-07-15 17:39:52.052847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.978 [2024-07-15 17:39:52.052903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.978 [2024-07-15 17:39:52.052908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.236 I/O targets: 00:16:57.236 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:57.236 00:16:57.236 00:16:57.236 CUnit - A unit testing framework for C - Version 2.1-3 00:16:57.236 http://cunit.sourceforge.net/ 00:16:57.236 00:16:57.236 00:16:57.236 Suite: bdevio tests on: Nvme1n1 00:16:57.236 Test: blockdev write read block ...passed 00:16:57.236 Test: blockdev write zeroes read block ...passed 00:16:57.236 Test: blockdev write zeroes read no split ...passed 00:16:57.493 Test: blockdev write zeroes read split ...passed 00:16:57.493 Test: blockdev write zeroes read split partial ...passed 00:16:57.493 Test: blockdev reset ...[2024-07-15 17:39:52.474343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:57.493 [2024-07-15 17:39:52.474454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176fb0 (9): Bad file descriptor 00:16:57.493 [2024-07-15 17:39:52.528897] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:57.493 passed 00:16:57.493 Test: blockdev write read 8 blocks ...passed 00:16:57.493 Test: blockdev write read size > 128k ...passed 00:16:57.493 Test: blockdev write read invalid size ...passed 00:16:57.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.493 Test: blockdev write read max offset ...passed 00:16:57.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:57.751 Test: blockdev writev readv 8 blocks ...passed 00:16:57.751 Test: blockdev writev readv 30 x 1block ...passed 00:16:57.751 Test: blockdev writev readv block ...passed 00:16:57.751 Test: blockdev writev readv size > 128k ...passed 00:16:57.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:57.751 Test: blockdev comparev and writev ...[2024-07-15 17:39:52.705454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.705490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.705515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.705533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.705923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.705948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.705969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.705985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.706383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.706406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.706428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.706444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.706818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.706840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.706861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:57.751 [2024-07-15 17:39:52.706884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:57.751 passed 00:16:57.751 Test: blockdev nvme passthru rw ...passed 00:16:57.751 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:39:52.791245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:57.751 [2024-07-15 17:39:52.791272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.791473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:57.751 [2024-07-15 17:39:52.791496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.791703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:57.751 [2024-07-15 17:39:52.791731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:57.751 [2024-07-15 17:39:52.791940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:57.751 [2024-07-15 17:39:52.791963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:57.751 passed 00:16:57.751 Test: blockdev nvme admin passthru ...passed 00:16:57.751 Test: blockdev copy ...passed 00:16:57.751 00:16:57.751 Run Summary: Type Total Ran Passed Failed Inactive 00:16:57.751 suites 1 1 n/a 0 0 00:16:57.751 tests 23 23 23 0 0 00:16:57.751 asserts 152 152 152 0 n/a 00:16:57.751 00:16:57.751 Elapsed time = 1.175 seconds 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.316 rmmod nvme_tcp 00:16:58.316 rmmod nvme_fabrics 00:16:58.316 rmmod nvme_keyring 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2248493 ']' 00:16:58.316 17:39:53 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2248493 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2248493 ']' 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2248493 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2248493 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2248493' 00:16:58.316 killing process with pid 2248493 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2248493 00:16:58.316 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2248493 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.575 17:39:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.107 17:39:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:01.107 00:17:01.107 real 0m6.586s 00:17:01.107 user 0m10.660s 00:17:01.107 sys 0m2.535s 00:17:01.107 17:39:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.107 17:39:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:01.107 ************************************ 00:17:01.107 END TEST nvmf_bdevio_no_huge 00:17:01.107 ************************************ 00:17:01.107 17:39:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:01.107 17:39:55 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:01.107 17:39:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:01.107 17:39:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.107 17:39:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.107 ************************************ 00:17:01.107 START TEST nvmf_tls 00:17:01.107 ************************************ 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:01.107 * Looking for test storage... 
00:17:01.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.107 17:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:03.008 
17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:03.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:03.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:03.008 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:03.008 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:03.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:17:03.008 00:17:03.008 --- 10.0.0.2 ping statistics --- 00:17:03.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.008 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:17:03.008 00:17:03.008 --- 10.0.0.1 ping statistics --- 00:17:03.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.008 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2250711 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2250711 00:17:03.008 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2250711 ']' 00:17:03.009 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.009 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.009 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.009 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.009 17:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.009 [2024-07-15 17:39:58.017631] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:03.009 [2024-07-15 17:39:58.017709] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.009 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.009 [2024-07-15 17:39:58.082790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.267 [2024-07-15 17:39:58.189001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.267 [2024-07-15 17:39:58.189053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:03.267 [2024-07-15 17:39:58.189082] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.267 [2024-07-15 17:39:58.189092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.267 [2024-07-15 17:39:58.189102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.267 [2024-07-15 17:39:58.189133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:03.267 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:03.524 true 00:17:03.524 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:03.524 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:03.782 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:03.782 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:03.782 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:04.040 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:04.040 17:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:04.298 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:04.298 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:04.298 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:04.556 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:04.556 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:04.814 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:04.814 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:04.814 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:04.814 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:05.077 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:05.077 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:05.077 17:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:05.341 17:40:00 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:05.341 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:05.598 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:05.598 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:05.598 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:05.856 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:05.856 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:06.112 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:06.113 17:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.sznY3dPxBe 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.4DthwKbOVP 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.sznY3dPxBe 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.4DthwKbOVP 00:17:06.113 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:06.369 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:06.932 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.sznY3dPxBe 00:17:06.932 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sznY3dPxBe 00:17:06.932 17:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:06.932 [2024-07-15 17:40:02.013932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.932 17:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:07.523 17:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:07.523 [2024-07-15 17:40:02.603476] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.523 [2024-07-15 17:40:02.603711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.523 17:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:07.781 malloc0 00:17:07.781 17:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:08.038 17:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sznY3dPxBe 00:17:08.294 [2024-07-15 17:40:03.405357] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:08.294 17:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sznY3dPxBe 00:17:08.550 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.518 Initializing NVMe Controllers 00:17:18.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:18.518 Initialization complete. Launching workers. 
00:17:18.518 ======================================================== 00:17:18.518 Latency(us) 00:17:18.518 Device Information : IOPS MiB/s Average min max 00:17:18.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7761.58 30.32 8248.44 1279.55 9655.25 00:17:18.518 ======================================================== 00:17:18.518 Total : 7761.58 30.32 8248.44 1279.55 9655.25 00:17:18.518 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sznY3dPxBe 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sznY3dPxBe' 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2252503 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2252503 /var/tmp/bdevperf.sock 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2252503 ']' 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.518 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.518 [2024-07-15 17:40:13.583853] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
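Relative to the plain bdevio run, the TLS path adds three things visible in the trace: the ssl sock implementation is pinned to TLS 1.3, an interchange PSK (NVMeTLSkey-1:01:...) is written to a mode-0600 temp file, and both sides are pointed at that file — the target via nvmf_subsystem_add_host --psk, the initiator via --psk-path. A condensed sketch, treating the key file name and key material from this run as placeholders and assuming the traced echo is redirected into that file:

    rpc.py sock_impl_set_options -i ssl --tls-version 13
    KEY=$(mktemp)    # /tmp/tmp.sznY3dPxBe in this run
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
    chmod 0600 "$KEY"
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
    # initiator workload: spdk_nvme_perf over TLS (run inside the same namespace here), same key file
    ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$KEY"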
00:17:18.518 [2024-07-15 17:40:13.583971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252503 ] 00:17:18.518 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.518 [2024-07-15 17:40:13.642290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.776 [2024-07-15 17:40:13.747912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.776 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.776 17:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:18.776 17:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sznY3dPxBe 00:17:19.033 [2024-07-15 17:40:14.105055] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.033 [2024-07-15 17:40:14.105170] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:19.294 TLSTESTn1 00:17:19.294 17:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:19.294 Running I/O for 10 seconds... 00:17:29.253 00:17:29.253 Latency(us) 00:17:29.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:29.253 Verification LBA range: start 0x0 length 0x2000 00:17:29.253 TLSTESTn1 : 10.05 2387.17 9.32 0.00 0.00 53476.20 6990.51 84662.80 00:17:29.253 =================================================================================================================== 00:17:29.253 Total : 2387.17 9.32 0.00 0.00 53476.20 6990.51 84662.80 00:17:29.253 0 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2252503 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2252503 ']' 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2252503 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2252503 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2252503' 00:17:29.510 killing process with pid 2252503 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2252503 00:17:29.510 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.510 00:17:29.510 Latency(us) 00:17:29.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:29.510 =================================================================================================================== 00:17:29.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.510 [2024-07-15 17:40:24.429216] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:29.510 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2252503 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4DthwKbOVP 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4DthwKbOVP 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4DthwKbOVP 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4DthwKbOVP' 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2253816 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2253816 /var/tmp/bdevperf.sock 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2253816 ']' 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.767 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.767 [2024-07-15 17:40:24.710632] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
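The successful run torn down just above (pid 2252503) is the basic initiator-side TLS exercise that target/tls.sh repeats for every key variant below. As a condensed sketch of that sequence, not the verbatim run_bdevperf helper: traps, waitforlisten and PID bookkeeping are omitted, and SPDK_DIR plus PSK_FILE are placeholders rather than paths from this run.

  # Condensed sketch of the initiator-side TLS exercise driven above.
  SPDK_DIR=/path/to/spdk           # placeholder, not the workspace path in the trace
  PSK_FILE=/tmp/psk.key            # interchange-format key, mode 0600
  SOCK=/var/tmp/bdevperf.sock

  # 1) Start bdevperf idle (-z) on its own RPC socket, core mask 0x4.
  "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  sleep 2                          # stand-in for the waitforlisten loop

  # 2) Attach a TLS-protected controller; --psk points at the key file.
  "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$PSK_FILE"

  # 3) Run the queued verify workload and print the latency table.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests

  # 4) Tear down the bdevperf process.
  kill "$bdevperf_pid"

The NOT variants that follow reuse exactly this flow and simply treat a failing attach (run_bdevperf returning 1) as the expected outcome.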
00:17:29.767 [2024-07-15 17:40:24.710725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253816 ] 00:17:29.767 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.767 [2024-07-15 17:40:24.769934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.767 [2024-07-15 17:40:24.876843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.025 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.025 17:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:30.025 17:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4DthwKbOVP 00:17:30.283 [2024-07-15 17:40:25.202873] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.283 [2024-07-15 17:40:25.203007] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:30.283 [2024-07-15 17:40:25.213150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:30.283 [2024-07-15 17:40:25.213820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd46f90 (107): Transport endpoint is not connected 00:17:30.283 [2024-07-15 17:40:25.214810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd46f90 (9): Bad file descriptor 00:17:30.283 [2024-07-15 17:40:25.215811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:30.283 [2024-07-15 17:40:25.215833] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:30.283 [2024-07-15 17:40:25.215850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:30.283 request: 00:17:30.283 { 00:17:30.283 "name": "TLSTEST", 00:17:30.283 "trtype": "tcp", 00:17:30.283 "traddr": "10.0.0.2", 00:17:30.283 "adrfam": "ipv4", 00:17:30.283 "trsvcid": "4420", 00:17:30.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.283 "prchk_reftag": false, 00:17:30.283 "prchk_guard": false, 00:17:30.283 "hdgst": false, 00:17:30.283 "ddgst": false, 00:17:30.283 "psk": "/tmp/tmp.4DthwKbOVP", 00:17:30.283 "method": "bdev_nvme_attach_controller", 00:17:30.283 "req_id": 1 00:17:30.283 } 00:17:30.283 Got JSON-RPC error response 00:17:30.283 response: 00:17:30.283 { 00:17:30.283 "code": -5, 00:17:30.283 "message": "Input/output error" 00:17:30.283 } 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2253816 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2253816 ']' 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2253816 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2253816 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2253816' 00:17:30.283 killing process with pid 2253816 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2253816 00:17:30.283 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.283 00:17:30.283 Latency(us) 00:17:30.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.283 =================================================================================================================== 00:17:30.283 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.283 [2024-07-15 17:40:25.269493] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:30.283 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2253816 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sznY3dPxBe 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sznY3dPxBe 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sznY3dPxBe 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sznY3dPxBe' 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2253952 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2253952 /var/tmp/bdevperf.sock 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2253952 ']' 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.541 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.541 [2024-07-15 17:40:25.564921] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:17:30.541 [2024-07-15 17:40:25.565013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253952 ] 00:17:30.541 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.541 [2024-07-15 17:40:25.623072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.798 [2024-07-15 17:40:25.725502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.798 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.798 17:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:30.798 17:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sznY3dPxBe 00:17:31.056 [2024-07-15 17:40:26.078709] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.056 [2024-07-15 17:40:26.078824] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:31.056 [2024-07-15 17:40:26.085359] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:31.056 [2024-07-15 17:40:26.085390] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:31.056 [2024-07-15 17:40:26.085451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:31.056 [2024-07-15 17:40:26.085777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124f90 (107): Transport endpoint is not connected 00:17:31.056 [2024-07-15 17:40:26.086767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1124f90 (9): Bad file descriptor 00:17:31.056 [2024-07-15 17:40:26.087767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:31.056 [2024-07-15 17:40:26.087787] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:31.056 [2024-07-15 17:40:26.087813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:31.056 request: 00:17:31.056 { 00:17:31.056 "name": "TLSTEST", 00:17:31.056 "trtype": "tcp", 00:17:31.056 "traddr": "10.0.0.2", 00:17:31.056 "adrfam": "ipv4", 00:17:31.056 "trsvcid": "4420", 00:17:31.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.056 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:31.056 "prchk_reftag": false, 00:17:31.056 "prchk_guard": false, 00:17:31.056 "hdgst": false, 00:17:31.056 "ddgst": false, 00:17:31.056 "psk": "/tmp/tmp.sznY3dPxBe", 00:17:31.056 "method": "bdev_nvme_attach_controller", 00:17:31.056 "req_id": 1 00:17:31.056 } 00:17:31.056 Got JSON-RPC error response 00:17:31.056 response: 00:17:31.056 { 00:17:31.056 "code": -5, 00:17:31.056 "message": "Input/output error" 00:17:31.056 } 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2253952 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2253952 ']' 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2253952 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2253952 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2253952' 00:17:31.056 killing process with pid 2253952 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2253952 00:17:31.056 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.056 00:17:31.056 Latency(us) 00:17:31.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.056 =================================================================================================================== 00:17:31.056 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.056 [2024-07-15 17:40:26.132959] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:31.056 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2253952 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sznY3dPxBe 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sznY3dPxBe 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sznY3dPxBe 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sznY3dPxBe' 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2253979 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2253979 /var/tmp/bdevperf.sock 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2253979 ']' 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.314 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.314 [2024-07-15 17:40:26.408788] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:17:31.314 [2024-07-15 17:40:26.408889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253979 ] 00:17:31.314 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.572 [2024-07-15 17:40:26.469202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.572 [2024-07-15 17:40:26.579128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.572 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.572 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:31.572 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sznY3dPxBe 00:17:31.829 [2024-07-15 17:40:26.915622] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.829 [2024-07-15 17:40:26.915753] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:31.829 [2024-07-15 17:40:26.927513] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:31.829 [2024-07-15 17:40:26.927546] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:31.829 [2024-07-15 17:40:26.927668] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:31.829 [2024-07-15 17:40:26.927674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:31.829 [2024-07-15 17:40:26.928658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1621f90 (9): Bad file descriptor 00:17:31.829 [2024-07-15 17:40:26.929657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:31.829 [2024-07-15 17:40:26.929677] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:31.829 [2024-07-15 17:40:26.929693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:31.829 request: 00:17:31.829 { 00:17:31.829 "name": "TLSTEST", 00:17:31.829 "trtype": "tcp", 00:17:31.829 "traddr": "10.0.0.2", 00:17:31.829 "adrfam": "ipv4", 00:17:31.829 "trsvcid": "4420", 00:17:31.829 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:31.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.829 "prchk_reftag": false, 00:17:31.829 "prchk_guard": false, 00:17:31.829 "hdgst": false, 00:17:31.829 "ddgst": false, 00:17:31.829 "psk": "/tmp/tmp.sznY3dPxBe", 00:17:31.829 "method": "bdev_nvme_attach_controller", 00:17:31.829 "req_id": 1 00:17:31.829 } 00:17:31.829 Got JSON-RPC error response 00:17:31.829 response: 00:17:31.829 { 00:17:31.829 "code": -5, 00:17:31.829 "message": "Input/output error" 00:17:31.829 } 00:17:31.829 17:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2253979 00:17:31.829 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2253979 ']' 00:17:31.829 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2253979 00:17:31.829 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:31.829 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:31.829 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2253979 00:17:32.088 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:32.088 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:32.088 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2253979' 00:17:32.088 killing process with pid 2253979 00:17:32.088 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2253979 00:17:32.088 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.088 00:17:32.088 Latency(us) 00:17:32.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.088 =================================================================================================================== 00:17:32.088 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.088 [2024-07-15 17:40:26.982024] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:32.088 17:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2253979 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2254112 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2254112 /var/tmp/bdevperf.sock 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2254112 ']' 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.347 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.347 [2024-07-15 17:40:27.284453] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
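The two "Could not find PSK for identity" errors above make the failure mode visible: the identity presented during the TLS handshake is composed from the host NQN and the subsystem NQN, and tcp_sock_get_key only succeeds for host/subsystem pairs the target knows about. The composition shown below is copied from the error text itself, not from the SPDK sources, so treat it as illustrative.

  # Illustrative only: the identity string the target failed to resolve above.
  hostnqn="nqn.2016-06.io.spdk:host2"
  subnqn="nqn.2016-06.io.spdk:cnode1"
  echo "NVMe0R01 ${hostnqn} ${subnqn}"
  # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
  #
  # The lookup only succeeds for pairings registered on the target, e.g.:
  #   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
  #       nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sznY3dPxBe
  # Neither host2/cnode1 nor host1/cnode2 is known here, so both attempts end
  # in the Input/output error the NOT wrapper is waiting for.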
00:17:32.347 [2024-07-15 17:40:27.284543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254112 ] 00:17:32.347 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.347 [2024-07-15 17:40:27.343945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.347 [2024-07-15 17:40:27.447799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.606 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.606 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:32.606 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:32.864 [2024-07-15 17:40:27.840019] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:32.864 [2024-07-15 17:40:27.841503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8770 (9): Bad file descriptor 00:17:32.864 [2024-07-15 17:40:27.842498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:32.864 [2024-07-15 17:40:27.842520] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:32.864 [2024-07-15 17:40:27.842538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:32.864 request: 00:17:32.864 { 00:17:32.864 "name": "TLSTEST", 00:17:32.864 "trtype": "tcp", 00:17:32.864 "traddr": "10.0.0.2", 00:17:32.864 "adrfam": "ipv4", 00:17:32.864 "trsvcid": "4420", 00:17:32.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.864 "prchk_reftag": false, 00:17:32.864 "prchk_guard": false, 00:17:32.864 "hdgst": false, 00:17:32.864 "ddgst": false, 00:17:32.864 "method": "bdev_nvme_attach_controller", 00:17:32.864 "req_id": 1 00:17:32.864 } 00:17:32.864 Got JSON-RPC error response 00:17:32.864 response: 00:17:32.864 { 00:17:32.864 "code": -5, 00:17:32.864 "message": "Input/output error" 00:17:32.864 } 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2254112 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2254112 ']' 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2254112 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2254112 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2254112' 00:17:32.864 killing process with pid 2254112 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2254112 00:17:32.864 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.864 00:17:32.864 Latency(us) 00:17:32.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.864 =================================================================================================================== 00:17:32.864 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.864 17:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2254112 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2250711 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2250711 ']' 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2250711 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2250711 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2250711' 00:17:33.122 
killing process with pid 2250711 00:17:33.122 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2250711 00:17:33.122 [2024-07-15 17:40:28.189033] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:33.123 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2250711 00:17:33.382 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:33.382 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:33.382 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:33.382 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:33.382 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:33.382 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:33.382 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.zjhFYg5Uq0 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.zjhFYg5Uq0 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2254265 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2254265 00:17:33.641 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2254265 ']' 00:17:33.642 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.642 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.642 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.642 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.642 17:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.642 [2024-07-15 17:40:28.595372] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
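target/tls.sh@159 above turns a raw hex string into the long-format key NVMeTLSkey-1:02:...: that the rest of the run uses; the base64 payload is the key bytes plus a short checksum, and the :02: selects the 48-byte PSK variant. The real helper is format_interchange_psk/format_key in nvmf/common.sh, which also shells out to an inline python. A minimal sketch of that construction follows; the checksum detail (a little-endian CRC-32 of the key) is an assumption about the interchange format, not something visible in the trace.

  # Sketch of the interchange-key construction behind key_long above.
  key="00112233445566778899aabbccddeeff0011223344556677"
  digest="02"   # assumed to select the 48-byte PSK / SHA-384 variant
  python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%s:%s:" % (sys.argv[2], base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "$digest"
  # Compare the output with the key_long value captured in the trace; the
  # CRC-32 framing above is an assumption, not copied from nvmf/common.sh.

Written with echo -n into a mktemp file and chmod 0600, as done for /tmp/tmp.zjhFYg5Uq0 above, this string is what both the target's nvmf_subsystem_add_host --psk and the initiator's bdev_nvme_attach_controller --psk read back in the runs that follow.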
00:17:33.642 [2024-07-15 17:40:28.595479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.642 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.642 [2024-07-15 17:40:28.665611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.907 [2024-07-15 17:40:28.786147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.907 [2024-07-15 17:40:28.786211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.907 [2024-07-15 17:40:28.786240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.907 [2024-07-15 17:40:28.786253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.907 [2024-07-15 17:40:28.786266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.907 [2024-07-15 17:40:28.786301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.zjhFYg5Uq0 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zjhFYg5Uq0 00:17:34.516 17:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.775 [2024-07-15 17:40:29.815298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.775 17:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:35.034 17:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:35.291 [2024-07-15 17:40:30.404824] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.291 [2024-07-15 17:40:30.405102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.291 17:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:35.857 malloc0 00:17:35.857 17:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.857 17:40:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.zjhFYg5Uq0 00:17:36.115 [2024-07-15 17:40:31.218625] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zjhFYg5Uq0 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zjhFYg5Uq0' 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2254678 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2254678 /var/tmp/bdevperf.sock 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2254678 ']' 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.115 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.374 [2024-07-15 17:40:31.284623] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
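For readability, here is the target-side sequence the trace above just replayed (setup_nvmf_tgt, target/tls.sh@51-58): enable the TCP transport, create the subsystem, open a TLS listener with -k, back it with a malloc bdev, and register host1 with the PSK. The commands are the same ones logged above; only the shell variable names are mine, and the network-namespace wrapping of nvmf_tgt (ip netns exec cvl_0_0_ns_spdk) is omitted.

  # Target-side TLS bring-up, gathered in one place; rpc.py talks to the
  # target's default RPC socket.
  RPC=/path/to/spdk/scripts/rpc.py             # placeholder for the workspace rpc.py
  KEY=/tmp/tmp.zjhFYg5Uq0                      # 0600 interchange-format PSK

  "$RPC" nvmf_create_transport -t tcp -o
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  "$RPC" bdev_malloc_create 32 4096 -b malloc0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The -k flag is what makes the listener require TLS (hence the "TLS support is considered experimental" notice), and --psk is what registers the key the earlier identity-lookup failures could not find.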
00:17:36.374 [2024-07-15 17:40:31.284712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254678 ] 00:17:36.374 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.374 [2024-07-15 17:40:31.342471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.374 [2024-07-15 17:40:31.450307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.632 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.632 17:40:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:36.632 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zjhFYg5Uq0 00:17:36.890 [2024-07-15 17:40:31.828260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.890 [2024-07-15 17:40:31.828393] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:36.890 TLSTESTn1 00:17:36.890 17:40:31 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:37.153 Running I/O for 10 seconds... 00:17:47.119 00:17:47.119 Latency(us) 00:17:47.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.119 Verification LBA range: start 0x0 length 0x2000 00:17:47.119 TLSTESTn1 : 10.05 2328.65 9.10 0.00 0.00 54815.24 6189.51 92041.67 00:17:47.119 =================================================================================================================== 00:17:47.119 Total : 2328.65 9.10 0.00 0.00 54815.24 6189.51 92041.67 00:17:47.119 0 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2254678 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2254678 ']' 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2254678 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2254678 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2254678' 00:17:47.119 killing process with pid 2254678 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2254678 00:17:47.119 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.119 00:17:47.119 Latency(us) 00:17:47.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:47.119 =================================================================================================================== 00:17:47.119 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.119 [2024-07-15 17:40:42.159469] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.119 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2254678 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.zjhFYg5Uq0 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zjhFYg5Uq0 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zjhFYg5Uq0 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zjhFYg5Uq0 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zjhFYg5Uq0' 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2255984 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2255984 /var/tmp/bdevperf.sock 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2255984 ']' 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.377 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.377 [2024-07-15 17:40:42.464869] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:17:47.377 [2024-07-15 17:40:42.464971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255984 ] 00:17:47.377 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.635 [2024-07-15 17:40:42.523080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.635 [2024-07-15 17:40:42.626570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.635 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.635 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.635 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zjhFYg5Uq0 00:17:47.893 [2024-07-15 17:40:42.968069] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.893 [2024-07-15 17:40:42.968165] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:47.893 [2024-07-15 17:40:42.968179] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.zjhFYg5Uq0 00:17:47.893 request: 00:17:47.893 { 00:17:47.893 "name": "TLSTEST", 00:17:47.893 "trtype": "tcp", 00:17:47.893 "traddr": "10.0.0.2", 00:17:47.893 "adrfam": "ipv4", 00:17:47.893 "trsvcid": "4420", 00:17:47.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.893 "prchk_reftag": false, 00:17:47.893 "prchk_guard": false, 00:17:47.893 "hdgst": false, 00:17:47.893 "ddgst": false, 00:17:47.893 "psk": "/tmp/tmp.zjhFYg5Uq0", 00:17:47.893 "method": "bdev_nvme_attach_controller", 00:17:47.893 "req_id": 1 00:17:47.893 } 00:17:47.893 Got JSON-RPC error response 00:17:47.893 response: 00:17:47.893 { 00:17:47.893 "code": -1, 00:17:47.893 "message": "Operation not permitted" 00:17:47.894 } 00:17:47.894 17:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2255984 00:17:47.894 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2255984 ']' 00:17:47.894 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2255984 00:17:47.894 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.894 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.894 17:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2255984 00:17:47.894 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.894 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.894 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2255984' 00:17:47.894 killing process with pid 2255984 00:17:47.894 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2255984 00:17:47.894 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.894 00:17:47.894 Latency(us) 00:17:47.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.894 
=================================================================================================================== 00:17:47.894 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.894 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2255984 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2254265 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2254265 ']' 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2254265 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2254265 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2254265' 00:17:48.152 killing process with pid 2254265 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2254265 00:17:48.152 [2024-07-15 17:40:43.271729] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:48.152 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2254265 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2256125 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2256125 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2256125 ']' 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
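The run shut down just above (pid 2255984) is the file-permission check: target/tls.sh@170 flips the key file to 0666 and then expects bdev_nvme_attach_controller to refuse it. A reduced reproduction follows, under the assumption that a bdevperf instance and the TLS target are already running as earlier in the trace; the NOT helper from autotest_common.sh is replaced here by a plain if.

  # Negative check: a world-readable PSK file must be rejected by the initiator.
  RPC=/path/to/spdk/scripts/rpc.py             # placeholder for the workspace rpc.py
  KEY=/tmp/tmp.zjhFYg5Uq0

  chmod 0666 "$KEY"                            # deliberately too permissive
  if "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
      echo "unexpected: attach succeeded with a 0666 key file" >&2
      exit 1
  fi
  # Expected: bdev_nvme_load_psk logs "Incorrect permissions for PSK file" and
  # the RPC returns "Operation not permitted"; that failure is the passing outcome.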
00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.717 17:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.717 [2024-07-15 17:40:43.607152] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:48.717 [2024-07-15 17:40:43.607260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.717 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.717 [2024-07-15 17:40:43.674818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.717 [2024-07-15 17:40:43.788441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.717 [2024-07-15 17:40:43.788515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.717 [2024-07-15 17:40:43.788531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.717 [2024-07-15 17:40:43.788545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.717 [2024-07-15 17:40:43.788565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.717 [2024-07-15 17:40:43.788597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.zjhFYg5Uq0 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zjhFYg5Uq0 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.zjhFYg5Uq0 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zjhFYg5Uq0 00:17:49.650 17:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:49.908 [2024-07-15 17:40:44.808870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.908 17:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:50.166 
17:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:50.166 [2024-07-15 17:40:45.294110] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:50.166 [2024-07-15 17:40:45.294381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.424 17:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.424 malloc0 00:17:50.682 17:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:50.939 17:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zjhFYg5Uq0 00:17:50.939 [2024-07-15 17:40:46.043982] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:50.939 [2024-07-15 17:40:46.044026] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:50.939 [2024-07-15 17:40:46.044073] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:50.939 request: 00:17:50.939 { 00:17:50.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.939 "host": "nqn.2016-06.io.spdk:host1", 00:17:50.939 "psk": "/tmp/tmp.zjhFYg5Uq0", 00:17:50.940 "method": "nvmf_subsystem_add_host", 00:17:50.940 "req_id": 1 00:17:50.940 } 00:17:50.940 Got JSON-RPC error response 00:17:50.940 response: 00:17:50.940 { 00:17:50.940 "code": -32603, 00:17:50.940 "message": "Internal error" 00:17:50.940 } 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2256125 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2256125 ']' 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2256125 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.940 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2256125 00:17:51.198 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:51.198 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:51.198 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2256125' 00:17:51.198 killing process with pid 2256125 00:17:51.198 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2256125 00:17:51.198 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2256125 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.zjhFYg5Uq0 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:51.456 
17:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2256443 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2256443 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2256443 ']' 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.456 17:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.456 [2024-07-15 17:40:46.458628] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:51.456 [2024-07-15 17:40:46.458725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.456 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.456 [2024-07-15 17:40:46.525228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.714 [2024-07-15 17:40:46.640350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.714 [2024-07-15 17:40:46.640408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.714 [2024-07-15 17:40:46.640435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.714 [2024-07-15 17:40:46.640449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.714 [2024-07-15 17:40:46.640461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
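The failed nvmf_subsystem_add_host above ("Incorrect permissions for PSK file", surfaced to the caller as JSON-RPC error -32603) and the chmod 0600 that follows it show the constraint this part of tls.sh exercises: the target refuses to load a PSK interchange file that is group or world readable. A minimal target-side sketch of the same RPC sequence, assuming a running nvmf_tgt on the default socket and a hypothetical key file /tmp/psk.txt, might be:

    chmod 0600 /tmp/psk.txt   # the target rejects the key file if its permissions are too open
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt

With the permissions fixed, the add_host call succeeds on the retry later in the log.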
00:17:51.714 [2024-07-15 17:40:46.640490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.zjhFYg5Uq0 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zjhFYg5Uq0 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:52.647 [2024-07-15 17:40:47.739128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.647 17:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:52.905 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.163 [2024-07-15 17:40:48.244485] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.163 [2024-07-15 17:40:48.244745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.163 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:53.420 malloc0 00:17:53.420 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.676 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zjhFYg5Uq0 00:17:53.993 [2024-07-15 17:40:48.977660] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2256733 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2256733 /var/tmp/bdevperf.sock 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2256733 ']' 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.993 17:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.993 [2024-07-15 17:40:49.040154] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:53.993 [2024-07-15 17:40:49.040267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256733 ] 00:17:53.993 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.278 [2024-07-15 17:40:49.102039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.278 [2024-07-15 17:40:49.212499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.278 17:40:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.278 17:40:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.278 17:40:49 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zjhFYg5Uq0 00:17:54.536 [2024-07-15 17:40:49.543972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.536 [2024-07-15 17:40:49.544091] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:54.536 TLSTESTn1 00:17:54.536 17:40:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:55.101 17:40:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:55.101 "subsystems": [ 00:17:55.101 { 00:17:55.101 "subsystem": "keyring", 00:17:55.101 "config": [] 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "subsystem": "iobuf", 00:17:55.101 "config": [ 00:17:55.101 { 00:17:55.101 "method": "iobuf_set_options", 00:17:55.101 "params": { 00:17:55.101 "small_pool_count": 8192, 00:17:55.101 "large_pool_count": 1024, 00:17:55.101 "small_bufsize": 8192, 00:17:55.101 "large_bufsize": 135168 00:17:55.101 } 00:17:55.101 } 00:17:55.101 ] 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "subsystem": "sock", 00:17:55.101 "config": [ 00:17:55.101 { 00:17:55.101 "method": "sock_set_default_impl", 00:17:55.101 "params": { 00:17:55.101 "impl_name": "posix" 00:17:55.101 } 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "method": "sock_impl_set_options", 00:17:55.101 "params": { 00:17:55.101 "impl_name": "ssl", 00:17:55.101 "recv_buf_size": 4096, 00:17:55.101 "send_buf_size": 4096, 00:17:55.101 "enable_recv_pipe": true, 00:17:55.101 "enable_quickack": false, 00:17:55.101 "enable_placement_id": 0, 00:17:55.101 "enable_zerocopy_send_server": true, 00:17:55.101 "enable_zerocopy_send_client": false, 00:17:55.101 "zerocopy_threshold": 0, 00:17:55.101 "tls_version": 0, 00:17:55.101 "enable_ktls": false 00:17:55.101 } 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "method": "sock_impl_set_options", 00:17:55.101 "params": { 00:17:55.101 "impl_name": "posix", 00:17:55.101 "recv_buf_size": 2097152, 00:17:55.101 
"send_buf_size": 2097152, 00:17:55.101 "enable_recv_pipe": true, 00:17:55.101 "enable_quickack": false, 00:17:55.101 "enable_placement_id": 0, 00:17:55.101 "enable_zerocopy_send_server": true, 00:17:55.101 "enable_zerocopy_send_client": false, 00:17:55.101 "zerocopy_threshold": 0, 00:17:55.101 "tls_version": 0, 00:17:55.101 "enable_ktls": false 00:17:55.101 } 00:17:55.101 } 00:17:55.101 ] 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "subsystem": "vmd", 00:17:55.101 "config": [] 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "subsystem": "accel", 00:17:55.101 "config": [ 00:17:55.101 { 00:17:55.101 "method": "accel_set_options", 00:17:55.101 "params": { 00:17:55.101 "small_cache_size": 128, 00:17:55.101 "large_cache_size": 16, 00:17:55.101 "task_count": 2048, 00:17:55.101 "sequence_count": 2048, 00:17:55.101 "buf_count": 2048 00:17:55.101 } 00:17:55.101 } 00:17:55.101 ] 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "subsystem": "bdev", 00:17:55.101 "config": [ 00:17:55.101 { 00:17:55.101 "method": "bdev_set_options", 00:17:55.101 "params": { 00:17:55.101 "bdev_io_pool_size": 65535, 00:17:55.101 "bdev_io_cache_size": 256, 00:17:55.101 "bdev_auto_examine": true, 00:17:55.101 "iobuf_small_cache_size": 128, 00:17:55.101 "iobuf_large_cache_size": 16 00:17:55.101 } 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "method": "bdev_raid_set_options", 00:17:55.101 "params": { 00:17:55.101 "process_window_size_kb": 1024 00:17:55.101 } 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "method": "bdev_iscsi_set_options", 00:17:55.101 "params": { 00:17:55.101 "timeout_sec": 30 00:17:55.101 } 00:17:55.101 }, 00:17:55.101 { 00:17:55.101 "method": "bdev_nvme_set_options", 00:17:55.101 "params": { 00:17:55.101 "action_on_timeout": "none", 00:17:55.101 "timeout_us": 0, 00:17:55.101 "timeout_admin_us": 0, 00:17:55.101 "keep_alive_timeout_ms": 10000, 00:17:55.101 "arbitration_burst": 0, 00:17:55.101 "low_priority_weight": 0, 00:17:55.101 "medium_priority_weight": 0, 00:17:55.101 "high_priority_weight": 0, 00:17:55.101 "nvme_adminq_poll_period_us": 10000, 00:17:55.101 "nvme_ioq_poll_period_us": 0, 00:17:55.101 "io_queue_requests": 0, 00:17:55.101 "delay_cmd_submit": true, 00:17:55.101 "transport_retry_count": 4, 00:17:55.101 "bdev_retry_count": 3, 00:17:55.101 "transport_ack_timeout": 0, 00:17:55.101 "ctrlr_loss_timeout_sec": 0, 00:17:55.101 "reconnect_delay_sec": 0, 00:17:55.101 "fast_io_fail_timeout_sec": 0, 00:17:55.101 "disable_auto_failback": false, 00:17:55.101 "generate_uuids": false, 00:17:55.101 "transport_tos": 0, 00:17:55.101 "nvme_error_stat": false, 00:17:55.101 "rdma_srq_size": 0, 00:17:55.101 "io_path_stat": false, 00:17:55.101 "allow_accel_sequence": false, 00:17:55.101 "rdma_max_cq_size": 0, 00:17:55.101 "rdma_cm_event_timeout_ms": 0, 00:17:55.101 "dhchap_digests": [ 00:17:55.102 "sha256", 00:17:55.102 "sha384", 00:17:55.102 "sha512" 00:17:55.102 ], 00:17:55.102 "dhchap_dhgroups": [ 00:17:55.102 "null", 00:17:55.102 "ffdhe2048", 00:17:55.102 "ffdhe3072", 00:17:55.102 "ffdhe4096", 00:17:55.102 "ffdhe6144", 00:17:55.102 "ffdhe8192" 00:17:55.102 ] 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "bdev_nvme_set_hotplug", 00:17:55.102 "params": { 00:17:55.102 "period_us": 100000, 00:17:55.102 "enable": false 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "bdev_malloc_create", 00:17:55.102 "params": { 00:17:55.102 "name": "malloc0", 00:17:55.102 "num_blocks": 8192, 00:17:55.102 "block_size": 4096, 00:17:55.102 "physical_block_size": 4096, 00:17:55.102 "uuid": 
"20a5b735-c086-45e3-9dd5-f941f829132b", 00:17:55.102 "optimal_io_boundary": 0 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "bdev_wait_for_examine" 00:17:55.102 } 00:17:55.102 ] 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "subsystem": "nbd", 00:17:55.102 "config": [] 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "subsystem": "scheduler", 00:17:55.102 "config": [ 00:17:55.102 { 00:17:55.102 "method": "framework_set_scheduler", 00:17:55.102 "params": { 00:17:55.102 "name": "static" 00:17:55.102 } 00:17:55.102 } 00:17:55.102 ] 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "subsystem": "nvmf", 00:17:55.102 "config": [ 00:17:55.102 { 00:17:55.102 "method": "nvmf_set_config", 00:17:55.102 "params": { 00:17:55.102 "discovery_filter": "match_any", 00:17:55.102 "admin_cmd_passthru": { 00:17:55.102 "identify_ctrlr": false 00:17:55.102 } 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "nvmf_set_max_subsystems", 00:17:55.102 "params": { 00:17:55.102 "max_subsystems": 1024 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "nvmf_set_crdt", 00:17:55.102 "params": { 00:17:55.102 "crdt1": 0, 00:17:55.102 "crdt2": 0, 00:17:55.102 "crdt3": 0 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "nvmf_create_transport", 00:17:55.102 "params": { 00:17:55.102 "trtype": "TCP", 00:17:55.102 "max_queue_depth": 128, 00:17:55.102 "max_io_qpairs_per_ctrlr": 127, 00:17:55.102 "in_capsule_data_size": 4096, 00:17:55.102 "max_io_size": 131072, 00:17:55.102 "io_unit_size": 131072, 00:17:55.102 "max_aq_depth": 128, 00:17:55.102 "num_shared_buffers": 511, 00:17:55.102 "buf_cache_size": 4294967295, 00:17:55.102 "dif_insert_or_strip": false, 00:17:55.102 "zcopy": false, 00:17:55.102 "c2h_success": false, 00:17:55.102 "sock_priority": 0, 00:17:55.102 "abort_timeout_sec": 1, 00:17:55.102 "ack_timeout": 0, 00:17:55.102 "data_wr_pool_size": 0 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "nvmf_create_subsystem", 00:17:55.102 "params": { 00:17:55.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.102 "allow_any_host": false, 00:17:55.102 "serial_number": "SPDK00000000000001", 00:17:55.102 "model_number": "SPDK bdev Controller", 00:17:55.102 "max_namespaces": 10, 00:17:55.102 "min_cntlid": 1, 00:17:55.102 "max_cntlid": 65519, 00:17:55.102 "ana_reporting": false 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "nvmf_subsystem_add_host", 00:17:55.102 "params": { 00:17:55.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.102 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.102 "psk": "/tmp/tmp.zjhFYg5Uq0" 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "nvmf_subsystem_add_ns", 00:17:55.102 "params": { 00:17:55.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.102 "namespace": { 00:17:55.102 "nsid": 1, 00:17:55.102 "bdev_name": "malloc0", 00:17:55.102 "nguid": "20A5B735C08645E39DD5F941F829132B", 00:17:55.102 "uuid": "20a5b735-c086-45e3-9dd5-f941f829132b", 00:17:55.102 "no_auto_visible": false 00:17:55.102 } 00:17:55.102 } 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "method": "nvmf_subsystem_add_listener", 00:17:55.102 "params": { 00:17:55.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.102 "listen_address": { 00:17:55.102 "trtype": "TCP", 00:17:55.102 "adrfam": "IPv4", 00:17:55.102 "traddr": "10.0.0.2", 00:17:55.102 "trsvcid": "4420" 00:17:55.102 }, 00:17:55.102 "secure_channel": true 00:17:55.102 } 00:17:55.102 } 00:17:55.102 ] 00:17:55.102 } 00:17:55.102 ] 00:17:55.102 }' 00:17:55.102 17:40:49 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:55.361 17:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:55.361 "subsystems": [ 00:17:55.361 { 00:17:55.361 "subsystem": "keyring", 00:17:55.361 "config": [] 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "subsystem": "iobuf", 00:17:55.361 "config": [ 00:17:55.361 { 00:17:55.361 "method": "iobuf_set_options", 00:17:55.361 "params": { 00:17:55.361 "small_pool_count": 8192, 00:17:55.361 "large_pool_count": 1024, 00:17:55.361 "small_bufsize": 8192, 00:17:55.361 "large_bufsize": 135168 00:17:55.361 } 00:17:55.361 } 00:17:55.361 ] 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "subsystem": "sock", 00:17:55.361 "config": [ 00:17:55.361 { 00:17:55.361 "method": "sock_set_default_impl", 00:17:55.361 "params": { 00:17:55.361 "impl_name": "posix" 00:17:55.361 } 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "method": "sock_impl_set_options", 00:17:55.361 "params": { 00:17:55.361 "impl_name": "ssl", 00:17:55.361 "recv_buf_size": 4096, 00:17:55.361 "send_buf_size": 4096, 00:17:55.361 "enable_recv_pipe": true, 00:17:55.361 "enable_quickack": false, 00:17:55.361 "enable_placement_id": 0, 00:17:55.361 "enable_zerocopy_send_server": true, 00:17:55.361 "enable_zerocopy_send_client": false, 00:17:55.361 "zerocopy_threshold": 0, 00:17:55.361 "tls_version": 0, 00:17:55.361 "enable_ktls": false 00:17:55.361 } 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "method": "sock_impl_set_options", 00:17:55.361 "params": { 00:17:55.361 "impl_name": "posix", 00:17:55.361 "recv_buf_size": 2097152, 00:17:55.361 "send_buf_size": 2097152, 00:17:55.361 "enable_recv_pipe": true, 00:17:55.361 "enable_quickack": false, 00:17:55.361 "enable_placement_id": 0, 00:17:55.361 "enable_zerocopy_send_server": true, 00:17:55.361 "enable_zerocopy_send_client": false, 00:17:55.361 "zerocopy_threshold": 0, 00:17:55.361 "tls_version": 0, 00:17:55.361 "enable_ktls": false 00:17:55.361 } 00:17:55.361 } 00:17:55.361 ] 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "subsystem": "vmd", 00:17:55.361 "config": [] 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "subsystem": "accel", 00:17:55.361 "config": [ 00:17:55.361 { 00:17:55.361 "method": "accel_set_options", 00:17:55.361 "params": { 00:17:55.361 "small_cache_size": 128, 00:17:55.361 "large_cache_size": 16, 00:17:55.361 "task_count": 2048, 00:17:55.361 "sequence_count": 2048, 00:17:55.361 "buf_count": 2048 00:17:55.361 } 00:17:55.361 } 00:17:55.361 ] 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "subsystem": "bdev", 00:17:55.361 "config": [ 00:17:55.361 { 00:17:55.361 "method": "bdev_set_options", 00:17:55.361 "params": { 00:17:55.361 "bdev_io_pool_size": 65535, 00:17:55.361 "bdev_io_cache_size": 256, 00:17:55.361 "bdev_auto_examine": true, 00:17:55.361 "iobuf_small_cache_size": 128, 00:17:55.361 "iobuf_large_cache_size": 16 00:17:55.361 } 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "method": "bdev_raid_set_options", 00:17:55.361 "params": { 00:17:55.361 "process_window_size_kb": 1024 00:17:55.361 } 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "method": "bdev_iscsi_set_options", 00:17:55.361 "params": { 00:17:55.361 "timeout_sec": 30 00:17:55.361 } 00:17:55.361 }, 00:17:55.361 { 00:17:55.361 "method": "bdev_nvme_set_options", 00:17:55.361 "params": { 00:17:55.361 "action_on_timeout": "none", 00:17:55.361 "timeout_us": 0, 00:17:55.361 "timeout_admin_us": 0, 00:17:55.361 "keep_alive_timeout_ms": 10000, 00:17:55.361 "arbitration_burst": 0, 
00:17:55.361 "low_priority_weight": 0, 00:17:55.361 "medium_priority_weight": 0, 00:17:55.361 "high_priority_weight": 0, 00:17:55.361 "nvme_adminq_poll_period_us": 10000, 00:17:55.361 "nvme_ioq_poll_period_us": 0, 00:17:55.361 "io_queue_requests": 512, 00:17:55.361 "delay_cmd_submit": true, 00:17:55.361 "transport_retry_count": 4, 00:17:55.361 "bdev_retry_count": 3, 00:17:55.361 "transport_ack_timeout": 0, 00:17:55.361 "ctrlr_loss_timeout_sec": 0, 00:17:55.361 "reconnect_delay_sec": 0, 00:17:55.361 "fast_io_fail_timeout_sec": 0, 00:17:55.361 "disable_auto_failback": false, 00:17:55.361 "generate_uuids": false, 00:17:55.361 "transport_tos": 0, 00:17:55.361 "nvme_error_stat": false, 00:17:55.361 "rdma_srq_size": 0, 00:17:55.361 "io_path_stat": false, 00:17:55.361 "allow_accel_sequence": false, 00:17:55.361 "rdma_max_cq_size": 0, 00:17:55.361 "rdma_cm_event_timeout_ms": 0, 00:17:55.361 "dhchap_digests": [ 00:17:55.361 "sha256", 00:17:55.361 "sha384", 00:17:55.361 "sha512" 00:17:55.361 ], 00:17:55.361 "dhchap_dhgroups": [ 00:17:55.361 "null", 00:17:55.361 "ffdhe2048", 00:17:55.361 "ffdhe3072", 00:17:55.362 "ffdhe4096", 00:17:55.362 "ffdhe6144", 00:17:55.362 "ffdhe8192" 00:17:55.362 ] 00:17:55.362 } 00:17:55.362 }, 00:17:55.362 { 00:17:55.362 "method": "bdev_nvme_attach_controller", 00:17:55.362 "params": { 00:17:55.362 "name": "TLSTEST", 00:17:55.362 "trtype": "TCP", 00:17:55.362 "adrfam": "IPv4", 00:17:55.362 "traddr": "10.0.0.2", 00:17:55.362 "trsvcid": "4420", 00:17:55.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.362 "prchk_reftag": false, 00:17:55.362 "prchk_guard": false, 00:17:55.362 "ctrlr_loss_timeout_sec": 0, 00:17:55.362 "reconnect_delay_sec": 0, 00:17:55.362 "fast_io_fail_timeout_sec": 0, 00:17:55.362 "psk": "/tmp/tmp.zjhFYg5Uq0", 00:17:55.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.362 "hdgst": false, 00:17:55.362 "ddgst": false 00:17:55.362 } 00:17:55.362 }, 00:17:55.362 { 00:17:55.362 "method": "bdev_nvme_set_hotplug", 00:17:55.362 "params": { 00:17:55.362 "period_us": 100000, 00:17:55.362 "enable": false 00:17:55.362 } 00:17:55.362 }, 00:17:55.362 { 00:17:55.362 "method": "bdev_wait_for_examine" 00:17:55.362 } 00:17:55.362 ] 00:17:55.362 }, 00:17:55.362 { 00:17:55.362 "subsystem": "nbd", 00:17:55.362 "config": [] 00:17:55.362 } 00:17:55.362 ] 00:17:55.362 }' 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2256733 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2256733 ']' 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2256733 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2256733 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2256733' 00:17:55.362 killing process with pid 2256733 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2256733 00:17:55.362 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.362 00:17:55.362 Latency(us) 00:17:55.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:17:55.362 =================================================================================================================== 00:17:55.362 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.362 [2024-07-15 17:40:50.286445] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.362 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2256733 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2256443 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2256443 ']' 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2256443 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2256443 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2256443' 00:17:55.620 killing process with pid 2256443 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2256443 00:17:55.620 [2024-07-15 17:40:50.579709] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:55.620 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2256443 00:17:55.878 17:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:55.878 17:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.878 17:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:55.878 "subsystems": [ 00:17:55.878 { 00:17:55.878 "subsystem": "keyring", 00:17:55.878 "config": [] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "iobuf", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "iobuf_set_options", 00:17:55.878 "params": { 00:17:55.878 "small_pool_count": 8192, 00:17:55.878 "large_pool_count": 1024, 00:17:55.878 "small_bufsize": 8192, 00:17:55.878 "large_bufsize": 135168 00:17:55.878 } 00:17:55.878 } 00:17:55.878 ] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "sock", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "sock_set_default_impl", 00:17:55.878 "params": { 00:17:55.878 "impl_name": "posix" 00:17:55.878 } 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "method": "sock_impl_set_options", 00:17:55.878 "params": { 00:17:55.878 "impl_name": "ssl", 00:17:55.878 "recv_buf_size": 4096, 00:17:55.878 "send_buf_size": 4096, 00:17:55.878 "enable_recv_pipe": true, 00:17:55.878 "enable_quickack": false, 00:17:55.878 "enable_placement_id": 0, 00:17:55.878 "enable_zerocopy_send_server": true, 00:17:55.878 "enable_zerocopy_send_client": false, 00:17:55.878 "zerocopy_threshold": 0, 00:17:55.878 "tls_version": 0, 00:17:55.878 "enable_ktls": false 00:17:55.878 } 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "method": "sock_impl_set_options", 00:17:55.878 "params": { 00:17:55.878 "impl_name": "posix", 00:17:55.878 "recv_buf_size": 2097152, 00:17:55.878 "send_buf_size": 2097152, 00:17:55.878 "enable_recv_pipe": true, 
00:17:55.878 "enable_quickack": false, 00:17:55.878 "enable_placement_id": 0, 00:17:55.878 "enable_zerocopy_send_server": true, 00:17:55.878 "enable_zerocopy_send_client": false, 00:17:55.878 "zerocopy_threshold": 0, 00:17:55.878 "tls_version": 0, 00:17:55.878 "enable_ktls": false 00:17:55.878 } 00:17:55.878 } 00:17:55.878 ] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "vmd", 00:17:55.878 "config": [] 00:17:55.878 }, 00:17:55.878 { 00:17:55.878 "subsystem": "accel", 00:17:55.878 "config": [ 00:17:55.878 { 00:17:55.878 "method": "accel_set_options", 00:17:55.878 "params": { 00:17:55.878 "small_cache_size": 128, 00:17:55.878 "large_cache_size": 16, 00:17:55.878 "task_count": 2048, 00:17:55.878 "sequence_count": 2048, 00:17:55.878 "buf_count": 2048 00:17:55.878 } 00:17:55.878 } 00:17:55.878 ] 00:17:55.878 }, 00:17:55.878 { 00:17:55.879 "subsystem": "bdev", 00:17:55.879 "config": [ 00:17:55.879 { 00:17:55.879 "method": "bdev_set_options", 00:17:55.879 "params": { 00:17:55.879 "bdev_io_pool_size": 65535, 00:17:55.879 "bdev_io_cache_size": 256, 00:17:55.879 "bdev_auto_examine": true, 00:17:55.879 "iobuf_small_cache_size": 128, 00:17:55.879 "iobuf_large_cache_size": 16 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_raid_set_options", 00:17:55.879 "params": { 00:17:55.879 "process_window_size_kb": 1024 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_iscsi_set_options", 00:17:55.879 "params": { 00:17:55.879 "timeout_sec": 30 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_nvme_set_options", 00:17:55.879 "params": { 00:17:55.879 "action_on_timeout": "none", 00:17:55.879 "timeout_us": 0, 00:17:55.879 "timeout_admin_us": 0, 00:17:55.879 "keep_alive_timeout_ms": 10000, 00:17:55.879 "arbitration_burst": 0, 00:17:55.879 "low_priority_weight": 0, 00:17:55.879 "medium_priority_weight": 0, 00:17:55.879 "high_priority_weight": 0, 00:17:55.879 "nvme_adminq_poll_period_us": 10000, 00:17:55.879 "nvme_ioq_poll_period_us": 0, 00:17:55.879 "io_queue_requests": 0, 00:17:55.879 "delay_cmd_submit": true, 00:17:55.879 "transport_retry_count": 4, 00:17:55.879 "bdev_retry_count": 3, 00:17:55.879 "transport_ack_timeout": 0, 00:17:55.879 "ctrlr_loss_timeout_sec": 0, 00:17:55.879 "reconnect_delay_sec": 0, 00:17:55.879 "fast_io_fail_timeout_sec": 0, 00:17:55.879 "disable_auto_failback": false, 00:17:55.879 "generate_uuids": false, 00:17:55.879 "transport_tos": 0, 00:17:55.879 "nvme_error_stat": false, 00:17:55.879 "rdma_srq_size": 0, 00:17:55.879 "io_path_stat": false, 00:17:55.879 "allow_accel_sequence": false, 00:17:55.879 "rdma_max_cq_size": 0, 00:17:55.879 "rdma_cm_event_timeout_ms": 0, 00:17:55.879 "dhchap_digests": [ 00:17:55.879 "sha256", 00:17:55.879 "sha384", 00:17:55.879 "sha512" 00:17:55.879 ], 00:17:55.879 "dhchap_dhgroups": [ 00:17:55.879 "null", 00:17:55.879 "ffdhe2048", 00:17:55.879 "ffdhe3072", 00:17:55.879 "ffdhe4096", 00:17:55.879 "ffdhe6144", 00:17:55.879 "ffdhe8192" 00:17:55.879 ] 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_nvme_set_hotplug", 00:17:55.879 "params": { 00:17:55.879 "period_us": 100000, 00:17:55.879 "enable": false 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_malloc_create", 00:17:55.879 "params": { 00:17:55.879 "name": "malloc0", 00:17:55.879 "num_blocks": 8192, 00:17:55.879 "block_size": 4096, 00:17:55.879 "physical_block_size": 4096, 00:17:55.879 "uuid": "20a5b735-c086-45e3-9dd5-f941f829132b", 00:17:55.879 "optimal_io_boundary": 0 
00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "bdev_wait_for_examine" 00:17:55.879 } 00:17:55.879 ] 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "subsystem": "nbd", 00:17:55.879 "config": [] 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "subsystem": "scheduler", 00:17:55.879 "config": [ 00:17:55.879 { 00:17:55.879 "method": "framework_set_scheduler", 00:17:55.879 "params": { 00:17:55.879 "name": "static" 00:17:55.879 } 00:17:55.879 } 00:17:55.879 ] 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "subsystem": "nvmf", 00:17:55.879 "config": [ 00:17:55.879 { 00:17:55.879 "method": "nvmf_set_config", 00:17:55.879 "params": { 00:17:55.879 "discovery_filter": "match_any", 00:17:55.879 "admin_cmd_passthru": { 00:17:55.879 "identify_ctrlr": false 00:17:55.879 } 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "nvmf_set_max_subsystems", 00:17:55.879 "params": { 00:17:55.879 "max_subsystems": 1024 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "nvmf_set_crdt", 00:17:55.879 "params": { 00:17:55.879 "crdt1": 0, 00:17:55.879 "crdt2": 0, 00:17:55.879 "crdt3": 0 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "nvmf_create_transport", 00:17:55.879 "params": { 00:17:55.879 "trtype": "TCP", 00:17:55.879 "max_queue_depth": 128, 00:17:55.879 "max_io_qpairs_per_ctrlr": 127, 00:17:55.879 "in_capsule_data_size": 4096, 00:17:55.879 "max_io_size": 131072, 00:17:55.879 "io_unit_size": 131072, 00:17:55.879 "max_aq_depth": 128, 00:17:55.879 "num_shared_buffers": 511, 00:17:55.879 "buf_cache_size": 4294967295, 00:17:55.879 "dif_insert_or_strip": false, 00:17:55.879 "zcopy": false, 00:17:55.879 "c2h_success": false, 00:17:55.879 "sock_priority": 0, 00:17:55.879 "abort_timeout_sec": 1, 00:17:55.879 "ack_timeout": 0, 00:17:55.879 "data_wr_pool_size": 0 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "nvmf_create_subsystem", 00:17:55.879 "params": { 00:17:55.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.879 "allow_any_host": false, 00:17:55.879 "serial_number": "SPDK00000000000001", 00:17:55.879 "model_number": "SPDK bdev Controller", 00:17:55.879 "max_namespaces": 10, 00:17:55.879 "min_cntlid": 1, 00:17:55.879 "max_cntlid": 65519, 00:17:55.879 "ana_reporting": false 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "nvmf_subsystem_add_host", 00:17:55.879 "params": { 00:17:55.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.879 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.879 "psk": "/tmp/tmp.zjhFYg5Uq0" 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "nvmf_subsystem_add_ns", 00:17:55.879 "params": { 00:17:55.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.879 "namespace": { 00:17:55.879 "nsid": 1, 00:17:55.879 "bdev_name": "malloc0", 00:17:55.879 "nguid": "20A5B735C08645E39DD5F941F829132B", 00:17:55.879 "uuid": "20a5b735-c086-45e3-9dd5-f941f829132b", 00:17:55.879 "no_auto_visible": false 00:17:55.879 } 00:17:55.879 } 00:17:55.879 }, 00:17:55.879 { 00:17:55.879 "method": "nvmf_subsystem_add_listener", 00:17:55.879 "params": { 00:17:55.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.879 "listen_address": { 00:17:55.879 "trtype": "TCP", 00:17:55.879 "adrfam": "IPv4", 00:17:55.879 "traddr": "10.0.0.2", 00:17:55.879 "trsvcid": "4420" 00:17:55.879 }, 00:17:55.879 "secure_channel": true 00:17:55.879 } 00:17:55.879 } 00:17:55.879 ] 00:17:55.879 } 00:17:55.879 ] 00:17:55.879 }' 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.879 
17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2257010 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2257010 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2257010 ']' 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.879 17:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.879 [2024-07-15 17:40:50.939210] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:55.879 [2024-07-15 17:40:50.939321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.879 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.879 [2024-07-15 17:40:51.008101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.137 [2024-07-15 17:40:51.121927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.137 [2024-07-15 17:40:51.121973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.137 [2024-07-15 17:40:51.121995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.137 [2024-07-15 17:40:51.122006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.137 [2024-07-15 17:40:51.122015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
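The tgtconf blob captured above through save_config is handed back to a fresh nvmf_tgt as -c /dev/fd/62, which is what feeding it in via bash process substitution typically looks like in these scripts. A sketch of the same round trip with an ordinary file, assuming the previous target has already been killed and using the hypothetical name tgt.json, could be:

    ./scripts/rpc.py save_config > tgt.json   # dump the live transport/subsystem/TLS-host configuration
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt.json &

Restarting from the saved JSON is meant to reproduce the listener, the malloc0 namespace and the PSK-protected host entry without reissuing the individual RPCs.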
00:17:56.137 [2024-07-15 17:40:51.122085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.395 [2024-07-15 17:40:51.350263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.395 [2024-07-15 17:40:51.366223] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:56.395 [2024-07-15 17:40:51.382278] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.395 [2024-07-15 17:40:51.393043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2257163 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2257163 /var/tmp/bdevperf.sock 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2257163 ']' 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:56.961 "subsystems": [ 00:17:56.961 { 00:17:56.961 "subsystem": "keyring", 00:17:56.961 "config": [] 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "subsystem": "iobuf", 00:17:56.961 "config": [ 00:17:56.961 { 00:17:56.961 "method": "iobuf_set_options", 00:17:56.961 "params": { 00:17:56.961 "small_pool_count": 8192, 00:17:56.961 "large_pool_count": 1024, 00:17:56.961 "small_bufsize": 8192, 00:17:56.961 "large_bufsize": 135168 00:17:56.961 } 00:17:56.961 } 00:17:56.961 ] 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "subsystem": "sock", 00:17:56.961 "config": [ 00:17:56.961 { 00:17:56.961 "method": "sock_set_default_impl", 00:17:56.961 "params": { 00:17:56.961 "impl_name": "posix" 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "sock_impl_set_options", 00:17:56.961 "params": { 00:17:56.961 "impl_name": "ssl", 00:17:56.961 "recv_buf_size": 4096, 00:17:56.961 "send_buf_size": 4096, 00:17:56.961 "enable_recv_pipe": true, 00:17:56.961 "enable_quickack": false, 00:17:56.961 "enable_placement_id": 0, 00:17:56.961 "enable_zerocopy_send_server": true, 00:17:56.961 "enable_zerocopy_send_client": false, 00:17:56.961 "zerocopy_threshold": 0, 00:17:56.961 "tls_version": 0, 00:17:56.961 "enable_ktls": false 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "sock_impl_set_options", 00:17:56.961 "params": { 00:17:56.961 "impl_name": "posix", 00:17:56.961 "recv_buf_size": 2097152, 00:17:56.961 "send_buf_size": 2097152, 00:17:56.961 "enable_recv_pipe": true, 00:17:56.961 
"enable_quickack": false, 00:17:56.961 "enable_placement_id": 0, 00:17:56.961 "enable_zerocopy_send_server": true, 00:17:56.961 "enable_zerocopy_send_client": false, 00:17:56.961 "zerocopy_threshold": 0, 00:17:56.961 "tls_version": 0, 00:17:56.961 "enable_ktls": false 00:17:56.961 } 00:17:56.961 } 00:17:56.961 ] 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "subsystem": "vmd", 00:17:56.961 "config": [] 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "subsystem": "accel", 00:17:56.961 "config": [ 00:17:56.961 { 00:17:56.961 "method": "accel_set_options", 00:17:56.961 "params": { 00:17:56.961 "small_cache_size": 128, 00:17:56.961 "large_cache_size": 16, 00:17:56.961 "task_count": 2048, 00:17:56.961 "sequence_count": 2048, 00:17:56.961 "buf_count": 2048 00:17:56.961 } 00:17:56.961 } 00:17:56.961 ] 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "subsystem": "bdev", 00:17:56.961 "config": [ 00:17:56.961 { 00:17:56.961 "method": "bdev_set_options", 00:17:56.961 "params": { 00:17:56.961 "bdev_io_pool_size": 65535, 00:17:56.961 "bdev_io_cache_size": 256, 00:17:56.961 "bdev_auto_examine": true, 00:17:56.961 "iobuf_small_cache_size": 128, 00:17:56.961 "iobuf_large_cache_size": 16 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "bdev_raid_set_options", 00:17:56.961 "params": { 00:17:56.961 "process_window_size_kb": 1024 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "bdev_iscsi_set_options", 00:17:56.961 "params": { 00:17:56.961 "timeout_sec": 30 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "bdev_nvme_set_options", 00:17:56.961 "params": { 00:17:56.961 "action_on_timeout": "none", 00:17:56.961 "timeout_us": 0, 00:17:56.961 "timeout_admin_us": 0, 00:17:56.961 "keep_alive_timeout_ms": 10000, 00:17:56.961 "arbitration_burst": 0, 00:17:56.961 "low_priority_weight": 0, 00:17:56.961 "medium_priority_weight": 0, 00:17:56.961 "high_priority_weight": 0, 00:17:56.961 "nvme_adminq_poll_period_us": 10000, 00:17:56.961 "nvme_ioq_poll_period_us": 0, 00:17:56.961 "io_queue_requests": 512, 00:17:56.961 "delay_cmd_submit": true, 00:17:56.961 "transport_retry_count": 4, 00:17:56.961 "bdev_retry_count": 3, 00:17:56.961 "transport_ack_timeout": 0, 00:17:56.961 "ctrlr_loss_timeout_sec": 0, 00:17:56.961 "reconnect_delay_sec": 0, 00:17:56.961 "fast_io_fail_timeout_sec": 0, 00:17:56.961 "disable_auto_failback": false, 00:17:56.961 "generate_uuids": false, 00:17:56.961 "transport_tos": 0, 00:17:56.961 "nvme_error_stat": false, 00:17:56.961 "rdma_srq_size": 0, 00:17:56.961 "io_path_stat": false, 00:17:56.961 "allow_accel_sequence": false, 00:17:56.961 "rdma_max_cq_size": 0, 00:17:56.961 "rdma_cm_event_timeout_ms": 0, 00:17:56.961 "dhchap_digests": [ 00:17:56.961 "sha256", 00:17:56.961 "sha384", 00:17:56.961 "sha512" 00:17:56.961 ], 00:17:56.961 "dhchap_dhgroups": [ 00:17:56.961 "null", 00:17:56.961 "ffdhe2048", 00:17:56.961 "ffdhe3072", 00:17:56.961 "ffdhe4096", 00:17:56.961 "ffdhe6144", 00:17:56.961 "ffdhe8192" 00:17:56.961 ] 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "bdev_nvme_attach_controller", 00:17:56.961 "params": { 00:17:56.961 "name": "TLSTEST", 00:17:56.961 "trtype": "TCP", 00:17:56.961 "adrfam": "IPv4", 00:17:56.961 "traddr": "10.0.0.2", 00:17:56.961 "trsvcid": "4420", 00:17:56.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.961 "prchk_reftag": false, 00:17:56.961 "prchk_guard": false, 00:17:56.961 "ctrlr_loss_timeout_sec": 0, 00:17:56.961 "reconnect_delay_sec": 0, 00:17:56.961 "fast_io_fail_timeout_sec": 0, 00:17:56.961 
"psk": "/tmp/tmp.zjhFYg5Uq0", 00:17:56.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.961 "hdgst": false, 00:17:56.961 "ddgst": false 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "bdev_nvme_set_hotplug", 00:17:56.961 "params": { 00:17:56.961 "period_us": 100000, 00:17:56.961 "enable": false 00:17:56.961 } 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "method": "bdev_wait_for_examine" 00:17:56.961 } 00:17:56.961 ] 00:17:56.961 }, 00:17:56.961 { 00:17:56.961 "subsystem": "nbd", 00:17:56.961 "config": [] 00:17:56.961 } 00:17:56.961 ] 00:17:56.961 }' 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.961 17:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.961 [2024-07-15 17:40:51.939046] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:17:56.961 [2024-07-15 17:40:51.939128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257163 ] 00:17:56.961 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.961 [2024-07-15 17:40:51.997186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.218 [2024-07-15 17:40:52.111205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.218 [2024-07-15 17:40:52.280829] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.218 [2024-07-15 17:40:52.280979] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.153 17:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.153 17:40:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:58.153 17:40:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.153 Running I/O for 10 seconds... 
00:18:08.122 00:18:08.122 Latency(us) 00:18:08.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.122 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:08.122 Verification LBA range: start 0x0 length 0x2000 00:18:08.122 TLSTESTn1 : 10.06 2418.31 9.45 0.00 0.00 52775.16 7087.60 83886.08 00:18:08.122 =================================================================================================================== 00:18:08.122 Total : 2418.31 9.45 0.00 0.00 52775.16 7087.60 83886.08 00:18:08.122 0 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2257163 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2257163 ']' 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2257163 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2257163 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2257163' 00:18:08.122 killing process with pid 2257163 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2257163 00:18:08.122 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.122 00:18:08.122 Latency(us) 00:18:08.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.122 =================================================================================================================== 00:18:08.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.122 [2024-07-15 17:41:03.152433] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:08.122 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2257163 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2257010 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2257010 ']' 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2257010 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2257010 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2257010' 00:18:08.379 killing process with pid 2257010 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2257010 00:18:08.379 [2024-07-15 17:41:03.417705] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in 
v24.09 hit 1 times 00:18:08.379 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2257010 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2258496 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2258496 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2258496 ']' 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.636 17:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.636 [2024-07-15 17:41:03.746728] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:08.636 [2024-07-15 17:41:03.746830] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.894 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.894 [2024-07-15 17:41:03.815150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.894 [2024-07-15 17:41:03.928612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.894 [2024-07-15 17:41:03.928691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.894 [2024-07-15 17:41:03.928707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.894 [2024-07-15 17:41:03.928731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.894 [2024-07-15 17:41:03.928742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:08.894 [2024-07-15 17:41:03.928771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.zjhFYg5Uq0 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zjhFYg5Uq0 00:18:09.151 17:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:09.408 [2024-07-15 17:41:04.296112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.408 17:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:09.666 17:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:09.924 [2024-07-15 17:41:04.849562] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:09.924 [2024-07-15 17:41:04.849796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.924 17:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:10.182 malloc0 00:18:10.182 17:41:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:10.440 17:41:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zjhFYg5Uq0 00:18:10.698 [2024-07-15 17:41:05.662175] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:10.698 17:41:05 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2258781 00:18:10.698 17:41:05 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:10.698 17:41:05 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.698 17:41:05 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2258781 /var/tmp/bdevperf.sock 00:18:10.698 17:41:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2258781 ']' 00:18:10.698 17:41:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.698 17:41:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.699 17:41:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.699 17:41:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.699 17:41:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.699 [2024-07-15 17:41:05.727127] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:10.699 [2024-07-15 17:41:05.727212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258781 ] 00:18:10.699 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.699 [2024-07-15 17:41:05.788939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.956 [2024-07-15 17:41:05.904531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.892 17:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.892 17:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:11.892 17:41:06 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zjhFYg5Uq0 00:18:11.892 17:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:12.151 [2024-07-15 17:41:07.277415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.409 nvme0n1 00:18:12.409 17:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.409 Running I/O for 1 seconds... 
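Condensed, the setup_nvmf_tgt and bdevperf steps traced above come down to the RPC sequence below. This is a sketch only, with $SPDK_DIR standing in for the workspace path; the PSK file name and all RPC arguments are taken verbatim from the log:

    key=/tmp/tmp.zjhFYg5Uq0
    rpc="$SPDK_DIR/scripts/rpc.py"
    # Target side: TCP transport, subsystem, TLS-enabled listener (-k), malloc namespace, PSK-bound host.
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
    # Initiator side: standalone bdevperf, key registered in its keyring, TLS attach, then the run.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key"
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests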
00:18:13.806 00:18:13.806 Latency(us) 00:18:13.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.806 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.806 Verification LBA range: start 0x0 length 0x2000 00:18:13.806 nvme0n1 : 1.05 2181.36 8.52 0.00 0.00 57379.15 6262.33 83886.08 00:18:13.806 =================================================================================================================== 00:18:13.806 Total : 2181.36 8.52 0.00 0.00 57379.15 6262.33 83886.08 00:18:13.806 0 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2258781 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2258781 ']' 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2258781 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2258781 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2258781' 00:18:13.806 killing process with pid 2258781 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2258781 00:18:13.806 Received shutdown signal, test time was about 1.000000 seconds 00:18:13.806 00:18:13.806 Latency(us) 00:18:13.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.806 =================================================================================================================== 00:18:13.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2258781 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2258496 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2258496 ']' 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2258496 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2258496 00:18:13.806 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.807 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.807 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2258496' 00:18:13.807 killing process with pid 2258496 00:18:13.807 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2258496 00:18:13.807 [2024-07-15 17:41:08.866490] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:13.807 17:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2258496 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.086 
17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2259189 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2259189 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2259189 ']' 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.086 17:41:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.086 [2024-07-15 17:41:09.221590] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:14.086 [2024-07-15 17:41:09.221701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.345 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.345 [2024-07-15 17:41:09.289756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.345 [2024-07-15 17:41:09.401885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.345 [2024-07-15 17:41:09.401952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.345 [2024-07-15 17:41:09.401977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.345 [2024-07-15 17:41:09.401990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.346 [2024-07-15 17:41:09.402001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
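The killprocess/wait pairs in the trace above follow the usual autotest_common.sh pattern: confirm the PID still names an SPDK reactor, check that it is not a bare sudo wrapper, then kill it and reap the exit status. A simplified sketch of that pattern (the real helper also handles sudo-owned children and non-Linux hosts):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0 / reactor_1 in the trace
        [ "$name" = sudo ] && return 1                   # never signal a plain sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }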
00:18:14.346 [2024-07-15 17:41:09.402030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.282 [2024-07-15 17:41:10.180971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.282 malloc0 00:18:15.282 [2024-07-15 17:41:10.212691] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.282 [2024-07-15 17:41:10.212980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2259342 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2259342 /var/tmp/bdevperf.sock 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2259342 ']' 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.282 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.282 [2024-07-15 17:41:10.283252] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:18:15.282 [2024-07-15 17:41:10.283316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259342 ] 00:18:15.282 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.282 [2024-07-15 17:41:10.344000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.540 [2024-07-15 17:41:10.460113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.540 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.541 17:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.541 17:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zjhFYg5Uq0 00:18:15.798 17:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:16.055 [2024-07-15 17:41:11.066778] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.055 nvme0n1 00:18:16.055 17:41:11 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:16.314 Running I/O for 1 seconds... 00:18:17.249 00:18:17.249 Latency(us) 00:18:17.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.249 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:17.249 Verification LBA range: start 0x0 length 0x2000 00:18:17.249 nvme0n1 : 1.06 1515.00 5.92 0.00 0.00 82335.21 6602.15 117285.17 00:18:17.249 =================================================================================================================== 00:18:17.249 Total : 1515.00 5.92 0.00 0.00 82335.21 6602.15 117285.17 00:18:17.249 0 00:18:17.249 17:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:17.249 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.249 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.507 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.507 17:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:17.507 "subsystems": [ 00:18:17.507 { 00:18:17.507 "subsystem": "keyring", 00:18:17.507 "config": [ 00:18:17.507 { 00:18:17.507 "method": "keyring_file_add_key", 00:18:17.507 "params": { 00:18:17.507 "name": "key0", 00:18:17.507 "path": "/tmp/tmp.zjhFYg5Uq0" 00:18:17.507 } 00:18:17.507 } 00:18:17.507 ] 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "subsystem": "iobuf", 00:18:17.507 "config": [ 00:18:17.507 { 00:18:17.507 "method": "iobuf_set_options", 00:18:17.507 "params": { 00:18:17.507 "small_pool_count": 8192, 00:18:17.507 "large_pool_count": 1024, 00:18:17.507 "small_bufsize": 8192, 00:18:17.507 "large_bufsize": 135168 00:18:17.507 } 00:18:17.507 } 00:18:17.507 ] 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "subsystem": "sock", 00:18:17.507 "config": [ 00:18:17.507 { 00:18:17.507 "method": "sock_set_default_impl", 00:18:17.507 "params": { 00:18:17.507 "impl_name": "posix" 00:18:17.507 } 
00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "method": "sock_impl_set_options", 00:18:17.507 "params": { 00:18:17.507 "impl_name": "ssl", 00:18:17.507 "recv_buf_size": 4096, 00:18:17.507 "send_buf_size": 4096, 00:18:17.507 "enable_recv_pipe": true, 00:18:17.507 "enable_quickack": false, 00:18:17.507 "enable_placement_id": 0, 00:18:17.507 "enable_zerocopy_send_server": true, 00:18:17.507 "enable_zerocopy_send_client": false, 00:18:17.507 "zerocopy_threshold": 0, 00:18:17.507 "tls_version": 0, 00:18:17.507 "enable_ktls": false 00:18:17.507 } 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "method": "sock_impl_set_options", 00:18:17.507 "params": { 00:18:17.507 "impl_name": "posix", 00:18:17.507 "recv_buf_size": 2097152, 00:18:17.507 "send_buf_size": 2097152, 00:18:17.507 "enable_recv_pipe": true, 00:18:17.507 "enable_quickack": false, 00:18:17.507 "enable_placement_id": 0, 00:18:17.507 "enable_zerocopy_send_server": true, 00:18:17.507 "enable_zerocopy_send_client": false, 00:18:17.507 "zerocopy_threshold": 0, 00:18:17.507 "tls_version": 0, 00:18:17.507 "enable_ktls": false 00:18:17.507 } 00:18:17.507 } 00:18:17.507 ] 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "subsystem": "vmd", 00:18:17.507 "config": [] 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "subsystem": "accel", 00:18:17.507 "config": [ 00:18:17.507 { 00:18:17.507 "method": "accel_set_options", 00:18:17.507 "params": { 00:18:17.507 "small_cache_size": 128, 00:18:17.507 "large_cache_size": 16, 00:18:17.507 "task_count": 2048, 00:18:17.507 "sequence_count": 2048, 00:18:17.507 "buf_count": 2048 00:18:17.507 } 00:18:17.507 } 00:18:17.507 ] 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "subsystem": "bdev", 00:18:17.507 "config": [ 00:18:17.507 { 00:18:17.507 "method": "bdev_set_options", 00:18:17.507 "params": { 00:18:17.507 "bdev_io_pool_size": 65535, 00:18:17.507 "bdev_io_cache_size": 256, 00:18:17.507 "bdev_auto_examine": true, 00:18:17.507 "iobuf_small_cache_size": 128, 00:18:17.507 "iobuf_large_cache_size": 16 00:18:17.507 } 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "method": "bdev_raid_set_options", 00:18:17.507 "params": { 00:18:17.507 "process_window_size_kb": 1024 00:18:17.507 } 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "method": "bdev_iscsi_set_options", 00:18:17.507 "params": { 00:18:17.507 "timeout_sec": 30 00:18:17.507 } 00:18:17.507 }, 00:18:17.507 { 00:18:17.507 "method": "bdev_nvme_set_options", 00:18:17.507 "params": { 00:18:17.507 "action_on_timeout": "none", 00:18:17.507 "timeout_us": 0, 00:18:17.507 "timeout_admin_us": 0, 00:18:17.507 "keep_alive_timeout_ms": 10000, 00:18:17.507 "arbitration_burst": 0, 00:18:17.507 "low_priority_weight": 0, 00:18:17.507 "medium_priority_weight": 0, 00:18:17.507 "high_priority_weight": 0, 00:18:17.507 "nvme_adminq_poll_period_us": 10000, 00:18:17.507 "nvme_ioq_poll_period_us": 0, 00:18:17.507 "io_queue_requests": 0, 00:18:17.507 "delay_cmd_submit": true, 00:18:17.507 "transport_retry_count": 4, 00:18:17.507 "bdev_retry_count": 3, 00:18:17.507 "transport_ack_timeout": 0, 00:18:17.507 "ctrlr_loss_timeout_sec": 0, 00:18:17.508 "reconnect_delay_sec": 0, 00:18:17.508 "fast_io_fail_timeout_sec": 0, 00:18:17.508 "disable_auto_failback": false, 00:18:17.508 "generate_uuids": false, 00:18:17.508 "transport_tos": 0, 00:18:17.508 "nvme_error_stat": false, 00:18:17.508 "rdma_srq_size": 0, 00:18:17.508 "io_path_stat": false, 00:18:17.508 "allow_accel_sequence": false, 00:18:17.508 "rdma_max_cq_size": 0, 00:18:17.508 "rdma_cm_event_timeout_ms": 0, 00:18:17.508 "dhchap_digests": [ 00:18:17.508 "sha256", 
00:18:17.508 "sha384", 00:18:17.508 "sha512" 00:18:17.508 ], 00:18:17.508 "dhchap_dhgroups": [ 00:18:17.508 "null", 00:18:17.508 "ffdhe2048", 00:18:17.508 "ffdhe3072", 00:18:17.508 "ffdhe4096", 00:18:17.508 "ffdhe6144", 00:18:17.508 "ffdhe8192" 00:18:17.508 ] 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "bdev_nvme_set_hotplug", 00:18:17.508 "params": { 00:18:17.508 "period_us": 100000, 00:18:17.508 "enable": false 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "bdev_malloc_create", 00:18:17.508 "params": { 00:18:17.508 "name": "malloc0", 00:18:17.508 "num_blocks": 8192, 00:18:17.508 "block_size": 4096, 00:18:17.508 "physical_block_size": 4096, 00:18:17.508 "uuid": "86b8c12a-52a2-48f1-901f-97f30a925804", 00:18:17.508 "optimal_io_boundary": 0 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "bdev_wait_for_examine" 00:18:17.508 } 00:18:17.508 ] 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "subsystem": "nbd", 00:18:17.508 "config": [] 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "subsystem": "scheduler", 00:18:17.508 "config": [ 00:18:17.508 { 00:18:17.508 "method": "framework_set_scheduler", 00:18:17.508 "params": { 00:18:17.508 "name": "static" 00:18:17.508 } 00:18:17.508 } 00:18:17.508 ] 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "subsystem": "nvmf", 00:18:17.508 "config": [ 00:18:17.508 { 00:18:17.508 "method": "nvmf_set_config", 00:18:17.508 "params": { 00:18:17.508 "discovery_filter": "match_any", 00:18:17.508 "admin_cmd_passthru": { 00:18:17.508 "identify_ctrlr": false 00:18:17.508 } 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "nvmf_set_max_subsystems", 00:18:17.508 "params": { 00:18:17.508 "max_subsystems": 1024 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "nvmf_set_crdt", 00:18:17.508 "params": { 00:18:17.508 "crdt1": 0, 00:18:17.508 "crdt2": 0, 00:18:17.508 "crdt3": 0 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "nvmf_create_transport", 00:18:17.508 "params": { 00:18:17.508 "trtype": "TCP", 00:18:17.508 "max_queue_depth": 128, 00:18:17.508 "max_io_qpairs_per_ctrlr": 127, 00:18:17.508 "in_capsule_data_size": 4096, 00:18:17.508 "max_io_size": 131072, 00:18:17.508 "io_unit_size": 131072, 00:18:17.508 "max_aq_depth": 128, 00:18:17.508 "num_shared_buffers": 511, 00:18:17.508 "buf_cache_size": 4294967295, 00:18:17.508 "dif_insert_or_strip": false, 00:18:17.508 "zcopy": false, 00:18:17.508 "c2h_success": false, 00:18:17.508 "sock_priority": 0, 00:18:17.508 "abort_timeout_sec": 1, 00:18:17.508 "ack_timeout": 0, 00:18:17.508 "data_wr_pool_size": 0 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "nvmf_create_subsystem", 00:18:17.508 "params": { 00:18:17.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.508 "allow_any_host": false, 00:18:17.508 "serial_number": "00000000000000000000", 00:18:17.508 "model_number": "SPDK bdev Controller", 00:18:17.508 "max_namespaces": 32, 00:18:17.508 "min_cntlid": 1, 00:18:17.508 "max_cntlid": 65519, 00:18:17.508 "ana_reporting": false 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "nvmf_subsystem_add_host", 00:18:17.508 "params": { 00:18:17.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.508 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.508 "psk": "key0" 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "nvmf_subsystem_add_ns", 00:18:17.508 "params": { 00:18:17.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.508 "namespace": { 00:18:17.508 "nsid": 1, 
00:18:17.508 "bdev_name": "malloc0", 00:18:17.508 "nguid": "86B8C12A52A248F1901F97F30A925804", 00:18:17.508 "uuid": "86b8c12a-52a2-48f1-901f-97f30a925804", 00:18:17.508 "no_auto_visible": false 00:18:17.508 } 00:18:17.508 } 00:18:17.508 }, 00:18:17.508 { 00:18:17.508 "method": "nvmf_subsystem_add_listener", 00:18:17.508 "params": { 00:18:17.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.508 "listen_address": { 00:18:17.508 "trtype": "TCP", 00:18:17.508 "adrfam": "IPv4", 00:18:17.508 "traddr": "10.0.0.2", 00:18:17.508 "trsvcid": "4420" 00:18:17.508 }, 00:18:17.508 "secure_channel": true 00:18:17.508 } 00:18:17.508 } 00:18:17.508 ] 00:18:17.508 } 00:18:17.508 ] 00:18:17.508 }' 00:18:17.508 17:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:17.767 "subsystems": [ 00:18:17.767 { 00:18:17.767 "subsystem": "keyring", 00:18:17.767 "config": [ 00:18:17.767 { 00:18:17.767 "method": "keyring_file_add_key", 00:18:17.767 "params": { 00:18:17.767 "name": "key0", 00:18:17.767 "path": "/tmp/tmp.zjhFYg5Uq0" 00:18:17.767 } 00:18:17.767 } 00:18:17.767 ] 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "subsystem": "iobuf", 00:18:17.767 "config": [ 00:18:17.767 { 00:18:17.767 "method": "iobuf_set_options", 00:18:17.767 "params": { 00:18:17.767 "small_pool_count": 8192, 00:18:17.767 "large_pool_count": 1024, 00:18:17.767 "small_bufsize": 8192, 00:18:17.767 "large_bufsize": 135168 00:18:17.767 } 00:18:17.767 } 00:18:17.767 ] 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "subsystem": "sock", 00:18:17.767 "config": [ 00:18:17.767 { 00:18:17.767 "method": "sock_set_default_impl", 00:18:17.767 "params": { 00:18:17.767 "impl_name": "posix" 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "sock_impl_set_options", 00:18:17.767 "params": { 00:18:17.767 "impl_name": "ssl", 00:18:17.767 "recv_buf_size": 4096, 00:18:17.767 "send_buf_size": 4096, 00:18:17.767 "enable_recv_pipe": true, 00:18:17.767 "enable_quickack": false, 00:18:17.767 "enable_placement_id": 0, 00:18:17.767 "enable_zerocopy_send_server": true, 00:18:17.767 "enable_zerocopy_send_client": false, 00:18:17.767 "zerocopy_threshold": 0, 00:18:17.767 "tls_version": 0, 00:18:17.767 "enable_ktls": false 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "sock_impl_set_options", 00:18:17.767 "params": { 00:18:17.767 "impl_name": "posix", 00:18:17.767 "recv_buf_size": 2097152, 00:18:17.767 "send_buf_size": 2097152, 00:18:17.767 "enable_recv_pipe": true, 00:18:17.767 "enable_quickack": false, 00:18:17.767 "enable_placement_id": 0, 00:18:17.767 "enable_zerocopy_send_server": true, 00:18:17.767 "enable_zerocopy_send_client": false, 00:18:17.767 "zerocopy_threshold": 0, 00:18:17.767 "tls_version": 0, 00:18:17.767 "enable_ktls": false 00:18:17.767 } 00:18:17.767 } 00:18:17.767 ] 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "subsystem": "vmd", 00:18:17.767 "config": [] 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "subsystem": "accel", 00:18:17.767 "config": [ 00:18:17.767 { 00:18:17.767 "method": "accel_set_options", 00:18:17.767 "params": { 00:18:17.767 "small_cache_size": 128, 00:18:17.767 "large_cache_size": 16, 00:18:17.767 "task_count": 2048, 00:18:17.767 "sequence_count": 2048, 00:18:17.767 "buf_count": 2048 00:18:17.767 } 00:18:17.767 } 00:18:17.767 ] 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "subsystem": "bdev", 00:18:17.767 "config": [ 
00:18:17.767 { 00:18:17.767 "method": "bdev_set_options", 00:18:17.767 "params": { 00:18:17.767 "bdev_io_pool_size": 65535, 00:18:17.767 "bdev_io_cache_size": 256, 00:18:17.767 "bdev_auto_examine": true, 00:18:17.767 "iobuf_small_cache_size": 128, 00:18:17.767 "iobuf_large_cache_size": 16 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "bdev_raid_set_options", 00:18:17.767 "params": { 00:18:17.767 "process_window_size_kb": 1024 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "bdev_iscsi_set_options", 00:18:17.767 "params": { 00:18:17.767 "timeout_sec": 30 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "bdev_nvme_set_options", 00:18:17.767 "params": { 00:18:17.767 "action_on_timeout": "none", 00:18:17.767 "timeout_us": 0, 00:18:17.767 "timeout_admin_us": 0, 00:18:17.767 "keep_alive_timeout_ms": 10000, 00:18:17.767 "arbitration_burst": 0, 00:18:17.767 "low_priority_weight": 0, 00:18:17.767 "medium_priority_weight": 0, 00:18:17.767 "high_priority_weight": 0, 00:18:17.767 "nvme_adminq_poll_period_us": 10000, 00:18:17.767 "nvme_ioq_poll_period_us": 0, 00:18:17.767 "io_queue_requests": 512, 00:18:17.767 "delay_cmd_submit": true, 00:18:17.767 "transport_retry_count": 4, 00:18:17.767 "bdev_retry_count": 3, 00:18:17.767 "transport_ack_timeout": 0, 00:18:17.767 "ctrlr_loss_timeout_sec": 0, 00:18:17.767 "reconnect_delay_sec": 0, 00:18:17.767 "fast_io_fail_timeout_sec": 0, 00:18:17.767 "disable_auto_failback": false, 00:18:17.767 "generate_uuids": false, 00:18:17.767 "transport_tos": 0, 00:18:17.767 "nvme_error_stat": false, 00:18:17.767 "rdma_srq_size": 0, 00:18:17.767 "io_path_stat": false, 00:18:17.767 "allow_accel_sequence": false, 00:18:17.767 "rdma_max_cq_size": 0, 00:18:17.767 "rdma_cm_event_timeout_ms": 0, 00:18:17.767 "dhchap_digests": [ 00:18:17.767 "sha256", 00:18:17.767 "sha384", 00:18:17.767 "sha512" 00:18:17.767 ], 00:18:17.767 "dhchap_dhgroups": [ 00:18:17.767 "null", 00:18:17.767 "ffdhe2048", 00:18:17.767 "ffdhe3072", 00:18:17.767 "ffdhe4096", 00:18:17.767 "ffdhe6144", 00:18:17.767 "ffdhe8192" 00:18:17.767 ] 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "bdev_nvme_attach_controller", 00:18:17.767 "params": { 00:18:17.767 "name": "nvme0", 00:18:17.767 "trtype": "TCP", 00:18:17.767 "adrfam": "IPv4", 00:18:17.767 "traddr": "10.0.0.2", 00:18:17.767 "trsvcid": "4420", 00:18:17.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.767 "prchk_reftag": false, 00:18:17.767 "prchk_guard": false, 00:18:17.767 "ctrlr_loss_timeout_sec": 0, 00:18:17.767 "reconnect_delay_sec": 0, 00:18:17.767 "fast_io_fail_timeout_sec": 0, 00:18:17.767 "psk": "key0", 00:18:17.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.767 "hdgst": false, 00:18:17.767 "ddgst": false 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "bdev_nvme_set_hotplug", 00:18:17.767 "params": { 00:18:17.767 "period_us": 100000, 00:18:17.767 "enable": false 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "bdev_enable_histogram", 00:18:17.767 "params": { 00:18:17.767 "name": "nvme0n1", 00:18:17.767 "enable": true 00:18:17.767 } 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "method": "bdev_wait_for_examine" 00:18:17.767 } 00:18:17.767 ] 00:18:17.767 }, 00:18:17.767 { 00:18:17.767 "subsystem": "nbd", 00:18:17.767 "config": [] 00:18:17.767 } 00:18:17.767 ] 00:18:17.767 }' 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2259342 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2259342 ']' 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2259342 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2259342 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2259342' 00:18:17.767 killing process with pid 2259342 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2259342 00:18:17.767 Received shutdown signal, test time was about 1.000000 seconds 00:18:17.767 00:18:17.767 Latency(us) 00:18:17.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.767 =================================================================================================================== 00:18:17.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.767 17:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2259342 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2259189 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2259189 ']' 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2259189 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2259189 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2259189' 00:18:18.027 killing process with pid 2259189 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2259189 00:18:18.027 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2259189 00:18:18.286 17:41:13 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:18.286 17:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.286 17:41:13 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:18.286 "subsystems": [ 00:18:18.286 { 00:18:18.286 "subsystem": "keyring", 00:18:18.286 "config": [ 00:18:18.286 { 00:18:18.286 "method": "keyring_file_add_key", 00:18:18.286 "params": { 00:18:18.286 "name": "key0", 00:18:18.286 "path": "/tmp/tmp.zjhFYg5Uq0" 00:18:18.286 } 00:18:18.286 } 00:18:18.286 ] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "iobuf", 00:18:18.286 "config": [ 00:18:18.286 { 00:18:18.286 "method": "iobuf_set_options", 00:18:18.286 "params": { 00:18:18.286 "small_pool_count": 8192, 00:18:18.286 "large_pool_count": 1024, 00:18:18.286 "small_bufsize": 8192, 00:18:18.286 "large_bufsize": 135168 00:18:18.286 } 00:18:18.286 } 00:18:18.286 ] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "sock", 00:18:18.286 "config": [ 00:18:18.286 { 
00:18:18.286 "method": "sock_set_default_impl", 00:18:18.286 "params": { 00:18:18.286 "impl_name": "posix" 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "sock_impl_set_options", 00:18:18.286 "params": { 00:18:18.286 "impl_name": "ssl", 00:18:18.286 "recv_buf_size": 4096, 00:18:18.286 "send_buf_size": 4096, 00:18:18.286 "enable_recv_pipe": true, 00:18:18.286 "enable_quickack": false, 00:18:18.286 "enable_placement_id": 0, 00:18:18.286 "enable_zerocopy_send_server": true, 00:18:18.286 "enable_zerocopy_send_client": false, 00:18:18.286 "zerocopy_threshold": 0, 00:18:18.286 "tls_version": 0, 00:18:18.286 "enable_ktls": false 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "sock_impl_set_options", 00:18:18.286 "params": { 00:18:18.286 "impl_name": "posix", 00:18:18.286 "recv_buf_size": 2097152, 00:18:18.286 "send_buf_size": 2097152, 00:18:18.286 "enable_recv_pipe": true, 00:18:18.286 "enable_quickack": false, 00:18:18.286 "enable_placement_id": 0, 00:18:18.286 "enable_zerocopy_send_server": true, 00:18:18.286 "enable_zerocopy_send_client": false, 00:18:18.286 "zerocopy_threshold": 0, 00:18:18.286 "tls_version": 0, 00:18:18.286 "enable_ktls": false 00:18:18.286 } 00:18:18.286 } 00:18:18.286 ] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "vmd", 00:18:18.286 "config": [] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "accel", 00:18:18.286 "config": [ 00:18:18.286 { 00:18:18.286 "method": "accel_set_options", 00:18:18.286 "params": { 00:18:18.286 "small_cache_size": 128, 00:18:18.286 "large_cache_size": 16, 00:18:18.286 "task_count": 2048, 00:18:18.286 "sequence_count": 2048, 00:18:18.286 "buf_count": 2048 00:18:18.286 } 00:18:18.286 } 00:18:18.286 ] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "bdev", 00:18:18.286 "config": [ 00:18:18.286 { 00:18:18.286 "method": "bdev_set_options", 00:18:18.286 "params": { 00:18:18.286 "bdev_io_pool_size": 65535, 00:18:18.286 "bdev_io_cache_size": 256, 00:18:18.286 "bdev_auto_examine": true, 00:18:18.286 "iobuf_small_cache_size": 128, 00:18:18.286 "iobuf_large_cache_size": 16 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "bdev_raid_set_options", 00:18:18.286 "params": { 00:18:18.286 "process_window_size_kb": 1024 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "bdev_iscsi_set_options", 00:18:18.286 "params": { 00:18:18.286 "timeout_sec": 30 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "bdev_nvme_set_options", 00:18:18.286 "params": { 00:18:18.286 "action_on_timeout": "none", 00:18:18.286 "timeout_us": 0, 00:18:18.286 "timeout_admin_us": 0, 00:18:18.286 "keep_alive_timeout_ms": 10000, 00:18:18.286 "arbitration_burst": 0, 00:18:18.286 "low_priority_weight": 0, 00:18:18.286 "medium_priority_weight": 0, 00:18:18.286 "high_priority_weight": 0, 00:18:18.286 "nvme_adminq_poll_period_us": 10000, 00:18:18.286 "nvme_ioq_poll_period_us": 0, 00:18:18.286 "io_queue_requests": 0, 00:18:18.286 "delay_cmd_submit": true, 00:18:18.286 "transport_retry_count": 4, 00:18:18.286 "bdev_retry_count": 3, 00:18:18.286 "transport_ack_timeout": 0, 00:18:18.286 "ctrlr_loss_timeout_sec": 0, 00:18:18.286 "reconnect_delay_sec": 0, 00:18:18.286 "fast_io_fail_timeout_sec": 0, 00:18:18.286 "disable_auto_failback": false, 00:18:18.286 "generate_uuids": false, 00:18:18.286 "transport_tos": 0, 00:18:18.286 "nvme_error_stat": false, 00:18:18.286 "rdma_srq_size": 0, 00:18:18.286 "io_path_stat": false, 00:18:18.286 "allow_accel_sequence": false, 00:18:18.286 
"rdma_max_cq_size": 0, 00:18:18.286 "rdma_cm_event_timeout_ms": 0, 00:18:18.286 "dhchap_digests": [ 00:18:18.286 "sha256", 00:18:18.286 "sha384", 00:18:18.286 "sha512" 00:18:18.286 ], 00:18:18.286 "dhchap_dhgroups": [ 00:18:18.286 "null", 00:18:18.286 "ffdhe2048", 00:18:18.286 "ffdhe3072", 00:18:18.286 "ffdhe4096", 00:18:18.286 "ffdhe6144", 00:18:18.286 "ffdhe8192" 00:18:18.286 ] 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "bdev_nvme_set_hotplug", 00:18:18.286 "params": { 00:18:18.286 "period_us": 100000, 00:18:18.286 "enable": false 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "bdev_malloc_create", 00:18:18.286 "params": { 00:18:18.286 "name": "malloc0", 00:18:18.286 "num_blocks": 8192, 00:18:18.286 "block_size": 4096, 00:18:18.286 "physical_block_size": 4096, 00:18:18.286 "uuid": "86b8c12a-52a2-48f1-901f-97f30a925804", 00:18:18.286 "optimal_io_boundary": 0 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "bdev_wait_for_examine" 00:18:18.286 } 00:18:18.286 ] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "nbd", 00:18:18.286 "config": [] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "scheduler", 00:18:18.286 "config": [ 00:18:18.286 { 00:18:18.286 "method": "framework_set_scheduler", 00:18:18.286 "params": { 00:18:18.286 "name": "static" 00:18:18.286 } 00:18:18.286 } 00:18:18.286 ] 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "subsystem": "nvmf", 00:18:18.286 "config": [ 00:18:18.286 { 00:18:18.286 "method": "nvmf_set_config", 00:18:18.286 "params": { 00:18:18.286 "discovery_filter": "match_any", 00:18:18.286 "admin_cmd_passthru": { 00:18:18.286 "identify_ctrlr": false 00:18:18.286 } 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "nvmf_set_max_subsystems", 00:18:18.286 "params": { 00:18:18.286 "max_subsystems": 1024 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "nvmf_set_crdt", 00:18:18.286 "params": { 00:18:18.286 "crdt1": 0, 00:18:18.286 "crdt2": 0, 00:18:18.286 "crdt3": 0 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "nvmf_create_transport", 00:18:18.286 "params": { 00:18:18.286 "trtype": "TCP", 00:18:18.286 "max_queue_depth": 128, 00:18:18.286 "max_io_qpairs_per_ctrlr": 127, 00:18:18.286 "in_capsule_data_size": 4096, 00:18:18.286 "max_io_size": 131072, 00:18:18.286 "io_unit_size": 131072, 00:18:18.286 "max_aq_depth": 128, 00:18:18.286 "num_shared_buffers": 511, 00:18:18.286 "buf_cache_size": 4294967295, 00:18:18.286 "dif_insert_or_strip": false, 00:18:18.286 "zcopy": false, 00:18:18.286 "c2h_success": false, 00:18:18.286 "sock_priority": 0, 00:18:18.286 "abort_timeout_sec": 1, 00:18:18.286 "ack_timeout": 0, 00:18:18.286 "data_wr_pool_size": 0 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "nvmf_create_subsystem", 00:18:18.286 "params": { 00:18:18.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.286 "allow_any_host": false, 00:18:18.286 "serial_number": "00000000000000000000", 00:18:18.286 "model_number": "SPDK bdev Controller", 00:18:18.286 "max_namespaces": 32, 00:18:18.286 "min_cntlid": 1, 00:18:18.286 "max_cntlid": 65519, 00:18:18.286 "ana_reporting": false 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "nvmf_subsystem_add_host", 00:18:18.286 "params": { 00:18:18.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.286 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.286 "psk": "key0" 00:18:18.286 } 00:18:18.286 }, 00:18:18.286 { 00:18:18.286 "method": "nvmf_subsystem_add_ns", 00:18:18.286 
"params": { 00:18:18.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.286 "namespace": { 00:18:18.286 "nsid": 1, 00:18:18.286 "bdev_name": "malloc0", 00:18:18.286 "nguid": "86B8C12A52A248F1901F97F30A925804", 00:18:18.286 "uuid": "86b8c12a-52a2-48f1-901f-97f30a925804", 00:18:18.286 "no_auto_visible": false 00:18:18.286 } 00:18:18.286 } 00:18:18.286 }, 00:18:18.287 { 00:18:18.287 "method": "nvmf_subsystem_add_listener", 00:18:18.287 "params": { 00:18:18.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.287 "listen_address": { 00:18:18.287 "trtype": "TCP", 00:18:18.287 "adrfam": "IPv4", 00:18:18.287 "traddr": "10.0.0.2", 00:18:18.287 "trsvcid": "4420" 00:18:18.287 }, 00:18:18.287 "secure_channel": true 00:18:18.287 } 00:18:18.287 } 00:18:18.287 ] 00:18:18.287 } 00:18:18.287 ] 00:18:18.287 }' 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2259752 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2259752 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2259752 ']' 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.287 17:41:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.546 [2024-07-15 17:41:13.436905] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:18.546 [2024-07-15 17:41:13.437001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.547 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.547 [2024-07-15 17:41:13.510228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.547 [2024-07-15 17:41:13.623414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.547 [2024-07-15 17:41:13.623470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.547 [2024-07-15 17:41:13.623484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.547 [2024-07-15 17:41:13.623495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.547 [2024-07-15 17:41:13.623505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:18.547 [2024-07-15 17:41:13.623584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.807 [2024-07-15 17:41:13.867266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.807 [2024-07-15 17:41:13.899306] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.807 [2024-07-15 17:41:13.907099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2259901 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2259901 /var/tmp/bdevperf.sock 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2259901 ']' 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.373 17:41:14 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:19.373 "subsystems": [ 00:18:19.373 { 00:18:19.373 "subsystem": "keyring", 00:18:19.373 "config": [ 00:18:19.373 { 00:18:19.373 "method": "keyring_file_add_key", 00:18:19.373 "params": { 00:18:19.373 "name": "key0", 00:18:19.374 "path": "/tmp/tmp.zjhFYg5Uq0" 00:18:19.374 } 00:18:19.374 } 00:18:19.374 ] 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "subsystem": "iobuf", 00:18:19.374 "config": [ 00:18:19.374 { 00:18:19.374 "method": "iobuf_set_options", 00:18:19.374 "params": { 00:18:19.374 "small_pool_count": 8192, 00:18:19.374 "large_pool_count": 1024, 00:18:19.374 "small_bufsize": 8192, 00:18:19.374 "large_bufsize": 135168 00:18:19.374 } 00:18:19.374 } 00:18:19.374 ] 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "subsystem": "sock", 00:18:19.374 "config": [ 00:18:19.374 { 00:18:19.374 "method": "sock_set_default_impl", 00:18:19.374 "params": { 00:18:19.374 "impl_name": "posix" 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "sock_impl_set_options", 00:18:19.374 "params": { 00:18:19.374 "impl_name": "ssl", 00:18:19.374 "recv_buf_size": 4096, 00:18:19.374 "send_buf_size": 4096, 00:18:19.374 "enable_recv_pipe": true, 00:18:19.374 "enable_quickack": false, 00:18:19.374 "enable_placement_id": 0, 00:18:19.374 "enable_zerocopy_send_server": true, 00:18:19.374 "enable_zerocopy_send_client": false, 00:18:19.374 "zerocopy_threshold": 0, 00:18:19.374 "tls_version": 0, 00:18:19.374 "enable_ktls": false 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "sock_impl_set_options", 00:18:19.374 "params": { 00:18:19.374 "impl_name": "posix", 00:18:19.374 "recv_buf_size": 2097152, 00:18:19.374 "send_buf_size": 2097152, 00:18:19.374 
"enable_recv_pipe": true, 00:18:19.374 "enable_quickack": false, 00:18:19.374 "enable_placement_id": 0, 00:18:19.374 "enable_zerocopy_send_server": true, 00:18:19.374 "enable_zerocopy_send_client": false, 00:18:19.374 "zerocopy_threshold": 0, 00:18:19.374 "tls_version": 0, 00:18:19.374 "enable_ktls": false 00:18:19.374 } 00:18:19.374 } 00:18:19.374 ] 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "subsystem": "vmd", 00:18:19.374 "config": [] 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "subsystem": "accel", 00:18:19.374 "config": [ 00:18:19.374 { 00:18:19.374 "method": "accel_set_options", 00:18:19.374 "params": { 00:18:19.374 "small_cache_size": 128, 00:18:19.374 "large_cache_size": 16, 00:18:19.374 "task_count": 2048, 00:18:19.374 "sequence_count": 2048, 00:18:19.374 "buf_count": 2048 00:18:19.374 } 00:18:19.374 } 00:18:19.374 ] 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "subsystem": "bdev", 00:18:19.374 "config": [ 00:18:19.374 { 00:18:19.374 "method": "bdev_set_options", 00:18:19.374 "params": { 00:18:19.374 "bdev_io_pool_size": 65535, 00:18:19.374 "bdev_io_cache_size": 256, 00:18:19.374 "bdev_auto_examine": true, 00:18:19.374 "iobuf_small_cache_size": 128, 00:18:19.374 "iobuf_large_cache_size": 16 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "bdev_raid_set_options", 00:18:19.374 "params": { 00:18:19.374 "process_window_size_kb": 1024 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "bdev_iscsi_set_options", 00:18:19.374 "params": { 00:18:19.374 "timeout_sec": 30 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "bdev_nvme_set_options", 00:18:19.374 "params": { 00:18:19.374 "action_on_timeout": "none", 00:18:19.374 "timeout_us": 0, 00:18:19.374 "timeout_admin_us": 0, 00:18:19.374 "keep_alive_timeout_ms": 10000, 00:18:19.374 "arbitration_burst": 0, 00:18:19.374 "low_priority_weight": 0, 00:18:19.374 "medium_priority_weight": 0, 00:18:19.374 "high_priority_weight": 0, 00:18:19.374 "nvme_adminq_poll_period_us": 10000, 00:18:19.374 "nvme_ioq_poll_period_us": 0, 00:18:19.374 "io_queue_requests": 512, 00:18:19.374 "delay_cmd_submit": true, 00:18:19.374 "transport_retry_count": 4, 00:18:19.374 "bdev_retry_count": 3, 00:18:19.374 "transport_ack_timeout": 0, 00:18:19.374 "ctrlr_loss_timeout_sec": 0, 00:18:19.374 "reconnect_delay_sec": 0, 00:18:19.374 "fast_io_fail_timeout_sec": 0, 00:18:19.374 "disable_auto_failback": false, 00:18:19.374 "generate_uuids": false, 00:18:19.374 "transport_tos": 0, 00:18:19.374 "nvme_error_stat": false, 00:18:19.374 "rdma_srq_size": 0, 00:18:19.374 "io_path_stat": false, 00:18:19.374 "allow_accel_sequence": false, 00:18:19.374 "rdma_max_cq_size": 0, 00:18:19.374 "rdma_cm_event_timeout_ms": 0, 00:18:19.374 "dhchap_digests": [ 00:18:19.374 "sha256", 00:18:19.374 "sha384", 00:18:19.374 "sh 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:19.374 a512" 00:18:19.374 ], 00:18:19.374 "dhchap_dhgroups": [ 00:18:19.374 "null", 00:18:19.374 "ffdhe2048", 00:18:19.374 "ffdhe3072", 00:18:19.374 "ffdhe4096", 00:18:19.374 "ffdhe6144", 00:18:19.374 "ffdhe8192" 00:18:19.374 ] 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "bdev_nvme_attach_controller", 00:18:19.374 "params": { 00:18:19.374 "name": "nvme0", 00:18:19.374 "trtype": "TCP", 00:18:19.374 "adrfam": "IPv4", 00:18:19.374 "traddr": "10.0.0.2", 00:18:19.374 "trsvcid": "4420", 00:18:19.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.374 "prchk_reftag": false, 00:18:19.374 "prchk_guard": false, 00:18:19.374 "ctrlr_loss_timeout_sec": 0, 00:18:19.374 "reconnect_delay_sec": 0, 00:18:19.374 "fast_io_fail_timeout_sec": 0, 00:18:19.374 "psk": "key0", 00:18:19.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.374 "hdgst": false, 00:18:19.374 "ddgst": false 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "bdev_nvme_set_hotplug", 00:18:19.374 "params": { 00:18:19.374 "period_us": 100000, 00:18:19.374 "enable": false 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "bdev_enable_histogram", 00:18:19.374 "params": { 00:18:19.374 "name": "nvme0n1", 00:18:19.374 "enable": true 00:18:19.374 } 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "method": "bdev_wait_for_examine" 00:18:19.374 } 00:18:19.374 ] 00:18:19.374 }, 00:18:19.374 { 00:18:19.374 "subsystem": "nbd", 00:18:19.374 "config": [] 00:18:19.374 } 00:18:19.374 ] 00:18:19.374 }' 00:18:19.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.374 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.374 17:41:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.374 [2024-07-15 17:41:14.482402] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:19.374 [2024-07-15 17:41:14.482488] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259901 ] 00:18:19.634 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.634 [2024-07-15 17:41:14.540764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.634 [2024-07-15 17:41:14.649224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.894 [2024-07-15 17:41:14.834582] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.461 17:41:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.461 17:41:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.461 17:41:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:20.461 17:41:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:20.721 17:41:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.721 17:41:15 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.721 Running I/O for 1 seconds... 
00:18:22.104 00:18:22.104 Latency(us) 00:18:22.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.104 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:22.104 Verification LBA range: start 0x0 length 0x2000 00:18:22.104 nvme0n1 : 1.05 2223.62 8.69 0.00 0.00 56459.59 9611.95 86604.61 00:18:22.104 =================================================================================================================== 00:18:22.104 Total : 2223.62 8.69 0.00 0.00 56459.59 9611.95 86604.61 00:18:22.104 0 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:22.104 nvmf_trace.0 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2259901 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2259901 ']' 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2259901 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.104 17:41:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2259901 00:18:22.104 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.104 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.104 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2259901' 00:18:22.104 killing process with pid 2259901 00:18:22.104 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2259901 00:18:22.104 Received shutdown signal, test time was about 1.000000 seconds 00:18:22.104 00:18:22.104 Latency(us) 00:18:22.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.104 =================================================================================================================== 00:18:22.104 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.104 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2259901 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
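The process_shm step traced above preserves the target's trace buffer before teardown: every *.0 file under /dev/shm (here nvmf_trace.0) is tar'ed into the test output directory for offline analysis. A sketch, with $output_dir assumed to point at the autotest output directory named in the log:

    # Archive shared-memory trace files (e.g. nvmf_trace.0) before the target goes away.
    for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -cvzf "$output_dir/${f}_shm.tar.gz" "$f"
    done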
00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.363 rmmod nvme_tcp 00:18:22.363 rmmod nvme_fabrics 00:18:22.363 rmmod nvme_keyring 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2259752 ']' 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2259752 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2259752 ']' 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2259752 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2259752 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2259752' 00:18:22.363 killing process with pid 2259752 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2259752 00:18:22.363 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2259752 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.622 17:41:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.154 17:41:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:25.154 17:41:19 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sznY3dPxBe /tmp/tmp.4DthwKbOVP /tmp/tmp.zjhFYg5Uq0 00:18:25.154 00:18:25.154 real 1m23.897s 00:18:25.154 user 2m13.086s 00:18:25.154 sys 0m28.855s 00:18:25.154 17:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.154 17:41:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.154 ************************************ 00:18:25.154 END TEST nvmf_tls 00:18:25.154 ************************************ 00:18:25.154 17:41:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:25.154 17:41:19 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:25.154 17:41:19 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:25.154 17:41:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.154 17:41:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.154 ************************************ 00:18:25.154 START TEST nvmf_fips 00:18:25.154 ************************************ 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:25.154 * Looking for test storage... 00:18:25.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:25.154 
17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:25.154 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:25.155 Error setting digest 00:18:25.155 003279C01C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:25.155 003279C01C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:25.155 17:41:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.055 
17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.055 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.055 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.055 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.055 17:41:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:18:27.055 00:18:27.055 --- 10.0.0.2 ping statistics --- 00:18:27.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.055 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:27.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:18:27.055 00:18:27.055 --- 10.0.0.1 ping statistics --- 00:18:27.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.055 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:27.055 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2262265 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2262265 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2262265 ']' 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.056 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.056 [2024-07-15 17:41:22.175975] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:27.056 [2024-07-15 17:41:22.176065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.314 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.314 [2024-07-15 17:41:22.245347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.314 [2024-07-15 17:41:22.360901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.314 [2024-07-15 17:41:22.360963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:27.314 [2024-07-15 17:41:22.360989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.314 [2024-07-15 17:41:22.361010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.314 [2024-07-15 17:41:22.361022] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.314 [2024-07-15 17:41:22.361052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:27.573 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.832 [2024-07-15 17:41:22.738661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.832 [2024-07-15 17:41:22.754634] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.832 [2024-07-15 17:41:22.754922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.832 [2024-07-15 17:41:22.787299] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:27.832 malloc0 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2262299 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2262299 /var/tmp/bdevperf.sock 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2262299 ']' 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.832 17:41:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.832 [2024-07-15 17:41:22.876545] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:18:27.832 [2024-07-15 17:41:22.876618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262299 ] 00:18:27.832 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.832 [2024-07-15 17:41:22.933720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.089 [2024-07-15 17:41:23.041624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.089 17:41:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.089 17:41:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:28.089 17:41:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:28.347 [2024-07-15 17:41:23.427679] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.347 [2024-07-15 17:41:23.427790] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:28.604 TLSTESTn1 00:18:28.604 17:41:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.604 Running I/O for 10 seconds... 
00:18:38.608 00:18:38.608 Latency(us) 00:18:38.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.608 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:38.608 Verification LBA range: start 0x0 length 0x2000 00:18:38.608 TLSTESTn1 : 10.05 2365.01 9.24 0.00 0.00 53973.71 11650.84 82721.00 00:18:38.608 =================================================================================================================== 00:18:38.608 Total : 2365.01 9.24 0.00 0.00 53973.71 11650.84 82721.00 00:18:38.608 0 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:38.608 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:38.608 nvmf_trace.0 00:18:38.866 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:38.866 17:41:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2262299 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2262299 ']' 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2262299 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2262299 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2262299' 00:18:38.867 killing process with pid 2262299 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2262299 00:18:38.867 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.867 00:18:38.867 Latency(us) 00:18:38.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.867 =================================================================================================================== 00:18:38.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.867 [2024-07-15 17:41:33.819355] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:38.867 17:41:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2262299 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.127 rmmod nvme_tcp 00:18:39.127 rmmod nvme_fabrics 00:18:39.127 rmmod nvme_keyring 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2262265 ']' 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2262265 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2262265 ']' 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2262265 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2262265 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2262265' 00:18:39.127 killing process with pid 2262265 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2262265 00:18:39.127 [2024-07-15 17:41:34.179971] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:39.127 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2262265 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.388 17:41:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.923 17:41:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.923 17:41:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.923 00:18:41.923 real 0m16.787s 00:18:41.923 user 0m20.878s 00:18:41.923 sys 0m6.439s 00:18:41.923 17:41:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.923 17:41:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:41.923 ************************************ 00:18:41.923 END TEST nvmf_fips 
00:18:41.923 ************************************ 00:18:41.923 17:41:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:41.923 17:41:36 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:41.923 17:41:36 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:41.923 17:41:36 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:41.923 17:41:36 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:41.923 17:41:36 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.923 17:41:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:43.826 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:43.826 17:41:38 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:43.826 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.826 17:41:38 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:43.827 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:43.827 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:43.827 17:41:38 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:43.827 17:41:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:43.827 17:41:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:18:43.827 17:41:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.827 ************************************ 00:18:43.827 START TEST nvmf_perf_adq 00:18:43.827 ************************************ 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:43.827 * Looking for test storage... 00:18:43.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.827 17:41:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:45.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:45.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 
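The gather_supported_nvmf_pci_devs entries above build per-family device-ID lists (E810: 0x1592/0x159b, X722: 0x37d2, plus several Mellanox IDs) from a pci_bus_cache lookup and then walk the matches, printing the 'Found <bdf>' lines. A rough sketch of the idea, assuming a plain /sys scan instead of the cached lookup the script actually uses (names here are illustrative, not the exact nvmf/common.sh helpers):

    intel=0x8086
    supported=("0x1592" "0x159b" "0x37d2")      # E810/X722 device IDs listed above
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${supported[@]}"; do
            [[ $device == "$id" ]] || continue
            # the kernel netdev name (e.g. cvl_0_0) lives under the device's net/ dir
            echo "Found ${pci##*/} ($vendor - $device): $(ls "$pci/net" 2>/dev/null)"
        done
    done

The trace then filters those matches down to interfaces that are actually up before building net_devs, which is where the 'Found net devices under ...' lines further on come from.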
00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:45.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:45.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:45.732 17:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:46.298 17:41:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:48.826 17:41:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:54.104 17:41:48 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:54.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:54.104 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:54.104 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:54.104 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.104 17:41:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:54.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:18:54.104 00:18:54.104 --- 10.0.0.2 ping statistics --- 00:18:54.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.104 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:18:54.104 00:18:54.104 --- 10.0.0.1 ping statistics --- 00:18:54.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.104 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2268165 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2268165 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2268165 ']' 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.104 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.104 [2024-07-15 17:41:48.636594] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
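The nvmf_tcp_init sequence traced above builds a two-endpoint topology on a single machine: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and acts as the NVMe-oF target at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the two pings just confirm both directions work before anything is started. Condensed from the commands in the trace (paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    # the target application itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc

With --wait-for-rpc the target starts idle, so the test can set socket options over RPC before framework_start_init, which is what the ADQ configuration later relies on.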
00:18:54.104 [2024-07-15 17:41:48.636667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.104 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.104 [2024-07-15 17:41:48.699618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:54.104 [2024-07-15 17:41:48.806695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.104 [2024-07-15 17:41:48.806748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.104 [2024-07-15 17:41:48.806776] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.104 [2024-07-15 17:41:48.806787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.104 [2024-07-15 17:41:48.806797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.104 [2024-07-15 17:41:48.806861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.104 [2024-07-15 17:41:48.806955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.105 [2024-07-15 17:41:48.806985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:54.105 [2024-07-15 17:41:48.806988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:48 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 [2024-07-15 17:41:49.024571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 Malloc1 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.105 [2024-07-15 17:41:49.075695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2268200 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:54.105 17:41:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:54.105 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:56.019 
"tick_rate": 2700000000, 00:18:56.019 "poll_groups": [ 00:18:56.019 { 00:18:56.019 "name": "nvmf_tgt_poll_group_000", 00:18:56.019 "admin_qpairs": 1, 00:18:56.019 "io_qpairs": 1, 00:18:56.019 "current_admin_qpairs": 1, 00:18:56.019 "current_io_qpairs": 1, 00:18:56.019 "pending_bdev_io": 0, 00:18:56.019 "completed_nvme_io": 16136, 00:18:56.019 "transports": [ 00:18:56.019 { 00:18:56.019 "trtype": "TCP" 00:18:56.019 } 00:18:56.019 ] 00:18:56.019 }, 00:18:56.019 { 00:18:56.019 "name": "nvmf_tgt_poll_group_001", 00:18:56.019 "admin_qpairs": 0, 00:18:56.019 "io_qpairs": 1, 00:18:56.019 "current_admin_qpairs": 0, 00:18:56.019 "current_io_qpairs": 1, 00:18:56.019 "pending_bdev_io": 0, 00:18:56.019 "completed_nvme_io": 21321, 00:18:56.019 "transports": [ 00:18:56.019 { 00:18:56.019 "trtype": "TCP" 00:18:56.019 } 00:18:56.019 ] 00:18:56.019 }, 00:18:56.019 { 00:18:56.019 "name": "nvmf_tgt_poll_group_002", 00:18:56.019 "admin_qpairs": 0, 00:18:56.019 "io_qpairs": 1, 00:18:56.019 "current_admin_qpairs": 0, 00:18:56.019 "current_io_qpairs": 1, 00:18:56.019 "pending_bdev_io": 0, 00:18:56.019 "completed_nvme_io": 21185, 00:18:56.019 "transports": [ 00:18:56.019 { 00:18:56.019 "trtype": "TCP" 00:18:56.019 } 00:18:56.019 ] 00:18:56.019 }, 00:18:56.019 { 00:18:56.019 "name": "nvmf_tgt_poll_group_003", 00:18:56.019 "admin_qpairs": 0, 00:18:56.019 "io_qpairs": 1, 00:18:56.019 "current_admin_qpairs": 0, 00:18:56.019 "current_io_qpairs": 1, 00:18:56.019 "pending_bdev_io": 0, 00:18:56.019 "completed_nvme_io": 20159, 00:18:56.019 "transports": [ 00:18:56.019 { 00:18:56.019 "trtype": "TCP" 00:18:56.019 } 00:18:56.019 ] 00:18:56.019 } 00:18:56.019 ] 00:18:56.019 }' 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:56.019 17:41:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2268200 00:19:04.132 Initializing NVMe Controllers 00:19:04.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:04.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:04.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:04.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:04.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:04.132 Initialization complete. Launching workers. 
00:19:04.132 ======================================================== 00:19:04.132 Latency(us) 00:19:04.132 Device Information : IOPS MiB/s Average min max 00:19:04.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10569.00 41.29 6055.36 2169.23 10158.04 00:19:04.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11119.70 43.44 5756.77 2480.55 7475.12 00:19:04.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11097.60 43.35 5768.38 4888.84 7248.35 00:19:04.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8492.60 33.17 7538.20 1778.68 12447.22 00:19:04.132 ======================================================== 00:19:04.132 Total : 41278.89 161.25 6202.85 1778.68 12447.22 00:19:04.132 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.132 rmmod nvme_tcp 00:19:04.132 rmmod nvme_fabrics 00:19:04.132 rmmod nvme_keyring 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:04.132 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2268165 ']' 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2268165 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2268165 ']' 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2268165 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2268165 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2268165' 00:19:04.390 killing process with pid 2268165 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2268165 00:19:04.390 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2268165 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.650 17:41:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.560 17:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.560 17:42:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:06.560 17:42:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:07.495 17:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:09.396 17:42:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.665 17:42:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.665 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:14.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:14.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
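Between the plain run and the ADQ run the script goes through adq_reload_driver (rmmod ice, modprobe ice, sleep 5, traced just above) and then re-discovers the ports from scratch. Reloading the ice driver drops whatever queue and traffic-class state the previous run left behind, and the sleep gives the cvl_* netdevs time to reappear before nvmftestinit rebuilds the namespace. As it appears in the trace, the helper amounts to the following sketch (the actual perf_adq.sh body may differ slightly):

    adq_reload_driver() {
        rmmod ice       # unload the E810 driver, discarding any tc/channel config
        modprobe ice    # load it back cleanly
        sleep 5         # wait for the interfaces to come back up
    }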
00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:14.666 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:14.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.666 
17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:19:14.666 00:19:14.666 --- 10.0.0.2 ping statistics --- 00:19:14.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.666 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:19:14.666 00:19:14.666 --- 10.0.0.1 ping statistics --- 00:19:14.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.666 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:14.666 net.core.busy_poll = 1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:14.666 net.core.busy_read = 1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2271436 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2271436 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2271436 ']' 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.666 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.666 [2024-07-15 17:42:09.616486] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:14.666 [2024-07-15 17:42:09.616575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.666 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.666 [2024-07-15 17:42:09.680313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.666 [2024-07-15 17:42:09.789476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.666 [2024-07-15 17:42:09.789527] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.666 [2024-07-15 17:42:09.789555] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.666 [2024-07-15 17:42:09.789566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.666 [2024-07-15 17:42:09.789575] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
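The adq_configure_driver entries above are the core of the ADQ setup: hardware TC offload and busy polling are switched on, an mqprio qdisc splits the target port into two traffic classes, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 in hardware. Condensed from the traced commands (all executed inside the cvl_0_0_ns_spdk namespace):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3, offloaded to the NIC
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst 10.0.0.2:4420) into TC1 entirely in hardware (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    scripts/perf/nvmf/set_xps_rxqs cvl_0_0      # SPDK helper: pin XPS/RX queue affinity

On the target side this is paired a few entries below with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1, which is what lets incoming connections land on the poll groups that own the ADQ queues.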
00:19:14.666 [2024-07-15 17:42:09.789661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.666 [2024-07-15 17:42:09.789727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.666 [2024-07-15 17:42:09.789793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.666 [2024-07-15 17:42:09.789796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 [2024-07-15 17:42:09.993610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 Malloc1 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.925 17:42:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.925 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.926 [2024-07-15 17:42:10.045399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.926 17:42:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.926 17:42:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2271580 00:19:14.926 17:42:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:14.926 17:42:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:15.185 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:17.090 "tick_rate": 2700000000, 00:19:17.090 "poll_groups": [ 00:19:17.090 { 00:19:17.090 "name": "nvmf_tgt_poll_group_000", 00:19:17.090 "admin_qpairs": 1, 00:19:17.090 "io_qpairs": 1, 00:19:17.090 "current_admin_qpairs": 1, 00:19:17.090 "current_io_qpairs": 1, 00:19:17.090 "pending_bdev_io": 0, 00:19:17.090 "completed_nvme_io": 25080, 00:19:17.090 "transports": [ 00:19:17.090 { 00:19:17.090 "trtype": "TCP" 00:19:17.090 } 00:19:17.090 ] 00:19:17.090 }, 00:19:17.090 { 00:19:17.090 "name": "nvmf_tgt_poll_group_001", 00:19:17.090 "admin_qpairs": 0, 00:19:17.090 "io_qpairs": 3, 00:19:17.090 "current_admin_qpairs": 0, 00:19:17.090 "current_io_qpairs": 3, 00:19:17.090 "pending_bdev_io": 0, 00:19:17.090 "completed_nvme_io": 27278, 00:19:17.090 "transports": [ 00:19:17.090 { 00:19:17.090 "trtype": "TCP" 00:19:17.090 } 00:19:17.090 ] 00:19:17.090 }, 00:19:17.090 { 00:19:17.090 "name": "nvmf_tgt_poll_group_002", 00:19:17.090 "admin_qpairs": 0, 00:19:17.090 "io_qpairs": 0, 00:19:17.090 "current_admin_qpairs": 0, 00:19:17.090 "current_io_qpairs": 0, 00:19:17.090 "pending_bdev_io": 0, 00:19:17.090 "completed_nvme_io": 0, 
00:19:17.090 "transports": [ 00:19:17.090 { 00:19:17.090 "trtype": "TCP" 00:19:17.090 } 00:19:17.090 ] 00:19:17.090 }, 00:19:17.090 { 00:19:17.090 "name": "nvmf_tgt_poll_group_003", 00:19:17.090 "admin_qpairs": 0, 00:19:17.090 "io_qpairs": 0, 00:19:17.090 "current_admin_qpairs": 0, 00:19:17.090 "current_io_qpairs": 0, 00:19:17.090 "pending_bdev_io": 0, 00:19:17.090 "completed_nvme_io": 0, 00:19:17.090 "transports": [ 00:19:17.090 { 00:19:17.090 "trtype": "TCP" 00:19:17.090 } 00:19:17.090 ] 00:19:17.090 } 00:19:17.090 ] 00:19:17.090 }' 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:17.090 17:42:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2271580 00:19:25.246 Initializing NVMe Controllers 00:19:25.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:25.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:25.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:25.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:25.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:25.246 Initialization complete. Launching workers. 00:19:25.246 ======================================================== 00:19:25.246 Latency(us) 00:19:25.246 Device Information : IOPS MiB/s Average min max 00:19:25.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13087.70 51.12 4890.48 1584.72 45914.97 00:19:25.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4901.30 19.15 13088.53 2135.38 58258.23 00:19:25.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4662.10 18.21 13773.04 2194.64 60328.77 00:19:25.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4809.30 18.79 13310.56 2362.93 60735.95 00:19:25.246 ======================================================== 00:19:25.246 Total : 27460.40 107.27 9336.41 1584.72 60735.95 00:19:25.246 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.246 rmmod nvme_tcp 00:19:25.246 rmmod nvme_fabrics 00:19:25.246 rmmod nvme_keyring 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2271436 ']' 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2271436 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2271436 ']' 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2271436 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2271436 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2271436' 00:19:25.246 killing process with pid 2271436 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2271436 00:19:25.246 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2271436 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.505 17:42:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.797 17:42:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.797 17:42:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:28.797 00:19:28.797 real 0m44.934s 00:19:28.797 user 2m32.957s 00:19:28.797 sys 0m12.016s 00:19:28.797 17:42:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:28.797 17:42:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.797 ************************************ 00:19:28.797 END TEST nvmf_perf_adq 00:19:28.797 ************************************ 00:19:28.797 17:42:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:28.797 17:42:23 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:28.797 17:42:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:28.797 17:42:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.797 17:42:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.797 ************************************ 00:19:28.797 START TEST nvmf_shutdown 00:19:28.797 ************************************ 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:28.797 * Looking for test storage... 
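For reference, the target-side setup that perf_adq.sh traces above can be replayed by hand with the SPDK RPC client. This is a minimal sketch, not the test script itself: it assumes an nvmf_tgt started with --wait-for-rpc (so the sock options can still be changed before framework_start_init) and calls scripts/rpc.py directly, whereas the trace goes through the autotest rpc_cmd wrapper; the flags themselves are copied from the trace, the paths are assumptions.

# sock options must be set before framework init; placement-id and zero-copy send are the ADQ-relevant bits
./scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
./scripts/rpc.py framework_start_init
# TCP transport with the same options issued at perf_adq.sh@45
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
# 64 MiB malloc bdev with 512-byte blocks, exported through cnode1 on 10.0.0.2:4420
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator-side load, as launched at perf_adq.sh@93
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# idle poll-group check behind perf_adq.sh@100-101: count poll groups that carry no I/O qpairs
./scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l

In the run above this pipeline returned 2 (poll groups 002 and 003 carried no I/O qpairs), the [[ 2 -lt 2 ]] check at perf_adq.sh@101 evaluated false, and the script went on to wait for the perf process at perf_adq.sh@106.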
00:19:28.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:28.797 ************************************ 00:19:28.797 START TEST nvmf_shutdown_tc1 00:19:28.797 ************************************ 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:28.797 17:42:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.797 17:42:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.701 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.702 17:42:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:19:30.702 00:19:30.702 --- 10.0.0.2 ping statistics --- 00:19:30.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.702 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:19:30.702 00:19:30.702 --- 10.0.0.1 ping statistics --- 00:19:30.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.702 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2274870 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2274870 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2274870 ']' 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.702 17:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:30.961 [2024-07-15 17:42:25.845893] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:19:30.961 [2024-07-15 17:42:25.845984] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.961 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.961 [2024-07-15 17:42:25.914466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.961 [2024-07-15 17:42:26.033352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.961 [2024-07-15 17:42:26.033411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.961 [2024-07-15 17:42:26.033427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.961 [2024-07-15 17:42:26.033441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.961 [2024-07-15 17:42:26.033452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.961 [2024-07-15 17:42:26.033533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.961 [2024-07-15 17:42:26.033656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.961 [2024-07-15 17:42:26.033689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.961 [2024-07-15 17:42:26.033687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.901 [2024-07-15 17:42:26.848018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:31.901 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:31.902 17:42:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.902 17:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.902 Malloc1 00:19:31.902 [2024-07-15 17:42:26.933588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.902 Malloc2 00:19:31.902 Malloc3 00:19:32.160 Malloc4 00:19:32.160 Malloc5 00:19:32.160 Malloc6 00:19:32.160 Malloc7 00:19:32.160 Malloc8 00:19:32.419 Malloc9 00:19:32.419 Malloc10 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2275059 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2275059 
/var/tmp/bdevperf.sock 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2275059 ']' 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 
"name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 
00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.419 "hdgst": ${hdgst:-false}, 00:19:32.419 "ddgst": ${ddgst:-false} 00:19:32.419 }, 00:19:32.419 "method": "bdev_nvme_attach_controller" 00:19:32.419 } 00:19:32.419 EOF 00:19:32.419 )") 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.419 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.419 { 00:19:32.419 "params": { 00:19:32.419 "name": "Nvme$subsystem", 00:19:32.419 "trtype": "$TEST_TRANSPORT", 00:19:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.419 "adrfam": "ipv4", 00:19:32.419 "trsvcid": "$NVMF_PORT", 00:19:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.420 "hdgst": ${hdgst:-false}, 00:19:32.420 "ddgst": ${ddgst:-false} 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 } 00:19:32.420 EOF 00:19:32.420 )") 00:19:32.420 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.420 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:32.420 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:32.420 { 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme$subsystem", 00:19:32.420 "trtype": "$TEST_TRANSPORT", 00:19:32.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "$NVMF_PORT", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.420 "hdgst": ${hdgst:-false}, 00:19:32.420 "ddgst": ${ddgst:-false} 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 } 00:19:32.420 EOF 00:19:32.420 )") 00:19:32.420 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:32.420 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:32.420 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:32.420 17:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme1", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme2", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme3", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme4", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme5", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme6", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme7", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme8", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:32.420 "hdgst": false, 
00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme9", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 },{ 00:19:32.420 "params": { 00:19:32.420 "name": "Nvme10", 00:19:32.420 "trtype": "tcp", 00:19:32.420 "traddr": "10.0.0.2", 00:19:32.420 "adrfam": "ipv4", 00:19:32.420 "trsvcid": "4420", 00:19:32.420 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:32.420 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:32.420 "hdgst": false, 00:19:32.420 "ddgst": false 00:19:32.420 }, 00:19:32.420 "method": "bdev_nvme_attach_controller" 00:19:32.420 }' 00:19:32.420 [2024-07-15 17:42:27.438773] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:32.420 [2024-07-15 17:42:27.438848] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:32.420 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.420 [2024-07-15 17:42:27.503824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.678 [2024-07-15 17:42:27.613067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2275059 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:34.585 17:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:35.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2275059 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2274870 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:35.530 17:42:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.530 { 00:19:35.530 "params": { 00:19:35.530 "name": "Nvme$subsystem", 00:19:35.530 "trtype": "$TEST_TRANSPORT", 00:19:35.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.530 "adrfam": "ipv4", 00:19:35.530 "trsvcid": "$NVMF_PORT", 00:19:35.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.530 "hdgst": ${hdgst:-false}, 00:19:35.530 "ddgst": ${ddgst:-false} 00:19:35.530 }, 00:19:35.530 "method": "bdev_nvme_attach_controller" 00:19:35.530 } 00:19:35.530 EOF 00:19:35.530 )") 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.530 { 00:19:35.530 "params": { 00:19:35.530 "name": "Nvme$subsystem", 00:19:35.530 "trtype": "$TEST_TRANSPORT", 00:19:35.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.530 "adrfam": "ipv4", 00:19:35.530 "trsvcid": "$NVMF_PORT", 00:19:35.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.530 "hdgst": ${hdgst:-false}, 00:19:35.530 "ddgst": ${ddgst:-false} 00:19:35.530 }, 00:19:35.530 "method": "bdev_nvme_attach_controller" 00:19:35.530 } 00:19:35.530 EOF 00:19:35.530 )") 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.530 { 00:19:35.530 "params": { 00:19:35.530 "name": "Nvme$subsystem", 00:19:35.530 "trtype": "$TEST_TRANSPORT", 00:19:35.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.530 "adrfam": "ipv4", 00:19:35.530 "trsvcid": "$NVMF_PORT", 00:19:35.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.530 "hdgst": ${hdgst:-false}, 00:19:35.530 "ddgst": ${ddgst:-false} 00:19:35.530 }, 00:19:35.530 "method": "bdev_nvme_attach_controller" 00:19:35.530 } 00:19:35.530 EOF 00:19:35.530 )") 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.530 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.531 { 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme$subsystem", 00:19:35.531 "trtype": "$TEST_TRANSPORT", 00:19:35.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "$NVMF_PORT", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.531 "hdgst": ${hdgst:-false}, 00:19:35.531 "ddgst": ${ddgst:-false} 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 } 00:19:35.531 EOF 00:19:35.531 )") 00:19:35.531 17:42:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.531 { 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme$subsystem", 00:19:35.531 "trtype": "$TEST_TRANSPORT", 00:19:35.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "$NVMF_PORT", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.531 "hdgst": ${hdgst:-false}, 00:19:35.531 "ddgst": ${ddgst:-false} 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 } 00:19:35.531 EOF 00:19:35.531 )") 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.531 { 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme$subsystem", 00:19:35.531 "trtype": "$TEST_TRANSPORT", 00:19:35.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "$NVMF_PORT", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.531 "hdgst": ${hdgst:-false}, 00:19:35.531 "ddgst": ${ddgst:-false} 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 } 00:19:35.531 EOF 00:19:35.531 )") 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.531 { 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme$subsystem", 00:19:35.531 "trtype": "$TEST_TRANSPORT", 00:19:35.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "$NVMF_PORT", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.531 "hdgst": ${hdgst:-false}, 00:19:35.531 "ddgst": ${ddgst:-false} 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 } 00:19:35.531 EOF 00:19:35.531 )") 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.531 { 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme$subsystem", 00:19:35.531 "trtype": "$TEST_TRANSPORT", 00:19:35.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "$NVMF_PORT", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.531 "hdgst": ${hdgst:-false}, 00:19:35.531 "ddgst": ${ddgst:-false} 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 } 00:19:35.531 EOF 00:19:35.531 )") 00:19:35.531 17:42:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.531 { 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme$subsystem", 00:19:35.531 "trtype": "$TEST_TRANSPORT", 00:19:35.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "$NVMF_PORT", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.531 "hdgst": ${hdgst:-false}, 00:19:35.531 "ddgst": ${ddgst:-false} 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 } 00:19:35.531 EOF 00:19:35.531 )") 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.531 { 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme$subsystem", 00:19:35.531 "trtype": "$TEST_TRANSPORT", 00:19:35.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "$NVMF_PORT", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.531 "hdgst": ${hdgst:-false}, 00:19:35.531 "ddgst": ${ddgst:-false} 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 } 00:19:35.531 EOF 00:19:35.531 )") 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
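
A condensed sketch of the JSON assembly traced above (gen_nvmf_target_json in nvmf/common.sh) is given below: one bdev_nvme_attach_controller entry is emitted per subsystem, the entries are joined with IFS=, and the result is run through jq. The transport/address values are the ones substituted in the trace; the helper name gen_target_entries and the top-level array wrapper are illustrative only (the real helper embeds these entries in a fuller SPDK config document).

```bash
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly visible in the trace above.
# Values below are the ones printed in the log; everything else is a
# simplified stand-in, not the actual nvmf/common.sh implementation.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_entries() {
        local subsystem config=()

        for subsystem in "${@:-1}"; do
                config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
                )")
        done

        # Join the entries with commas (the IFS=, + printf pair in the trace)
        # and pretty-print/validate the result with jq.
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .
}

gen_target_entries 1 2 3
```

The comma-joined output is what appears verbatim in the printf trace that follows, one params/method block per controller.
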
00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:35.531 17:42:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme1", 00:19:35.531 "trtype": "tcp", 00:19:35.531 "traddr": "10.0.0.2", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "4420", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.531 "hdgst": false, 00:19:35.531 "ddgst": false 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 },{ 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme2", 00:19:35.531 "trtype": "tcp", 00:19:35.531 "traddr": "10.0.0.2", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "4420", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:35.531 "hdgst": false, 00:19:35.531 "ddgst": false 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 },{ 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme3", 00:19:35.531 "trtype": "tcp", 00:19:35.531 "traddr": "10.0.0.2", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "4420", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:35.531 "hdgst": false, 00:19:35.531 "ddgst": false 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 },{ 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme4", 00:19:35.531 "trtype": "tcp", 00:19:35.531 "traddr": "10.0.0.2", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "4420", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:35.531 "hdgst": false, 00:19:35.531 "ddgst": false 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 },{ 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme5", 00:19:35.531 "trtype": "tcp", 00:19:35.531 "traddr": "10.0.0.2", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "4420", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:35.531 "hdgst": false, 00:19:35.531 "ddgst": false 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 },{ 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme6", 00:19:35.531 "trtype": "tcp", 00:19:35.531 "traddr": "10.0.0.2", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "4420", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:35.531 "hdgst": false, 00:19:35.531 "ddgst": false 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 },{ 00:19:35.531 "params": { 00:19:35.531 "name": "Nvme7", 00:19:35.531 "trtype": "tcp", 00:19:35.531 "traddr": "10.0.0.2", 00:19:35.531 "adrfam": "ipv4", 00:19:35.531 "trsvcid": "4420", 00:19:35.531 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:35.531 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:35.531 "hdgst": false, 00:19:35.531 "ddgst": false 00:19:35.531 }, 00:19:35.531 "method": "bdev_nvme_attach_controller" 00:19:35.531 },{ 00:19:35.532 "params": { 00:19:35.532 "name": "Nvme8", 00:19:35.532 "trtype": "tcp", 00:19:35.532 "traddr": "10.0.0.2", 00:19:35.532 "adrfam": "ipv4", 00:19:35.532 "trsvcid": "4420", 00:19:35.532 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:35.532 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:35.532 "hdgst": false, 
00:19:35.532 "ddgst": false 00:19:35.532 }, 00:19:35.532 "method": "bdev_nvme_attach_controller" 00:19:35.532 },{ 00:19:35.532 "params": { 00:19:35.532 "name": "Nvme9", 00:19:35.532 "trtype": "tcp", 00:19:35.532 "traddr": "10.0.0.2", 00:19:35.532 "adrfam": "ipv4", 00:19:35.532 "trsvcid": "4420", 00:19:35.532 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:35.532 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:35.532 "hdgst": false, 00:19:35.532 "ddgst": false 00:19:35.532 }, 00:19:35.532 "method": "bdev_nvme_attach_controller" 00:19:35.532 },{ 00:19:35.532 "params": { 00:19:35.532 "name": "Nvme10", 00:19:35.532 "trtype": "tcp", 00:19:35.532 "traddr": "10.0.0.2", 00:19:35.532 "adrfam": "ipv4", 00:19:35.532 "trsvcid": "4420", 00:19:35.532 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:35.532 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:35.532 "hdgst": false, 00:19:35.532 "ddgst": false 00:19:35.532 }, 00:19:35.532 "method": "bdev_nvme_attach_controller" 00:19:35.532 }' 00:19:35.532 [2024-07-15 17:42:30.473236] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:35.532 [2024-07-15 17:42:30.473322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275473 ] 00:19:35.532 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.532 [2024-07-15 17:42:30.539590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.532 [2024-07-15 17:42:30.653250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.432 Running I/O for 1 seconds... 00:19:38.366 00:19:38.366 Latency(us) 00:19:38.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.366 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme1n1 : 1.17 217.96 13.62 0.00 0.00 290631.87 22524.97 256318.58 00:19:38.366 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme2n1 : 1.13 225.79 14.11 0.00 0.00 275728.88 21068.61 256318.58 00:19:38.366 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme3n1 : 1.10 232.89 14.56 0.00 0.00 262838.80 19320.98 254765.13 00:19:38.366 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme4n1 : 1.09 238.30 14.89 0.00 0.00 252246.20 19223.89 246997.90 00:19:38.366 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme5n1 : 1.19 214.88 13.43 0.00 0.00 276683.28 22427.88 293601.28 00:19:38.366 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme6n1 : 1.14 224.01 14.00 0.00 0.00 260143.22 21068.61 254765.13 00:19:38.366 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme7n1 : 1.15 278.68 17.42 0.00 0.00 205638.47 15728.64 253211.69 00:19:38.366 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 
0x0 length 0x400 00:19:38.366 Nvme8n1 : 1.20 267.59 16.72 0.00 0.00 211581.00 19806.44 251658.24 00:19:38.366 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme9n1 : 1.19 215.97 13.50 0.00 0.00 257492.20 21165.70 251658.24 00:19:38.366 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.366 Verification LBA range: start 0x0 length 0x400 00:19:38.366 Nvme10n1 : 1.20 265.57 16.60 0.00 0.00 206328.41 13786.83 260978.92 00:19:38.366 =================================================================================================================== 00:19:38.366 Total : 2381.64 148.85 0.00 0.00 247002.90 13786.83 293601.28 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:38.626 rmmod nvme_tcp 00:19:38.626 rmmod nvme_fabrics 00:19:38.626 rmmod nvme_keyring 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2274870 ']' 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2274870 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2274870 ']' 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2274870 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2274870 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2274870' 00:19:38.626 killing process with pid 2274870 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2274870 00:19:38.626 17:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2274870 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.194 17:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:41.764 00:19:41.764 real 0m12.494s 00:19:41.764 user 0m37.389s 00:19:41.764 sys 0m3.228s 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:41.764 ************************************ 00:19:41.764 END TEST nvmf_shutdown_tc1 00:19:41.764 ************************************ 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:41.764 ************************************ 00:19:41.764 START TEST nvmf_shutdown_tc2 00:19:41.764 ************************************ 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.764 17:42:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:41.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:41.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:41.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.764 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:41.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:41.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:19:41.765 00:19:41.765 --- 10.0.0.2 ping statistics --- 00:19:41.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.765 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:19:41.765 00:19:41.765 --- 10.0.0.1 ping statistics --- 00:19:41.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.765 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2276242 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2276242 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2276242 ']' 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.765 [2024-07-15 17:42:36.538749] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:41.765 [2024-07-15 17:42:36.538833] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.765 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.765 [2024-07-15 17:42:36.613425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:41.765 [2024-07-15 17:42:36.731302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.765 [2024-07-15 17:42:36.731372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.765 [2024-07-15 17:42:36.731388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.765 [2024-07-15 17:42:36.731401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.765 [2024-07-15 17:42:36.731413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
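
The nvmf_tcp_init sequence traced above boils down to the sketch below: the first e810 port (cvl_0_0) is moved into a private network namespace and addressed as the 10.0.0.2 target, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator. Every command is copied from the trace; only the comments are added.

```bash
# Condensed from the nvmf_tcp_init trace above; a sketch of the sequence,
# not the full helper.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator interface, then verify
# reachability in both directions (matches the ping output in the log).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

With the namespace in place, the target application itself is then started inside it, as the trace shows next (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E).
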
00:19:41.765 [2024-07-15 17:42:36.731510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.765 [2024-07-15 17:42:36.731550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:41.765 [2024-07-15 17:42:36.731628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:41.765 [2024-07-15 17:42:36.731631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.765 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.765 [2024-07-15 17:42:36.893790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.025 17:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:42.025 Malloc1 00:19:42.025 [2024-07-15 17:42:36.982968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.025 Malloc2 00:19:42.025 Malloc3 00:19:42.025 Malloc4 00:19:42.025 Malloc5 00:19:42.285 Malloc6 00:19:42.285 Malloc7 00:19:42.285 Malloc8 00:19:42.285 Malloc9 00:19:42.285 Malloc10 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2276420 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2276420 /var/tmp/bdevperf.sock 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2276420 ']' 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.545 { 00:19:42.545 "params": { 00:19:42.545 "name": "Nvme$subsystem", 00:19:42.545 "trtype": "$TEST_TRANSPORT", 00:19:42.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.545 "adrfam": "ipv4", 00:19:42.545 "trsvcid": "$NVMF_PORT", 00:19:42.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.545 "hdgst": ${hdgst:-false}, 00:19:42.545 "ddgst": ${ddgst:-false} 00:19:42.545 }, 00:19:42.545 "method": "bdev_nvme_attach_controller" 00:19:42.545 } 00:19:42.545 EOF 00:19:42.545 )") 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.545 { 00:19:42.545 "params": { 00:19:42.545 "name": "Nvme$subsystem", 00:19:42.545 "trtype": "$TEST_TRANSPORT", 00:19:42.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.545 "adrfam": "ipv4", 00:19:42.545 "trsvcid": "$NVMF_PORT", 00:19:42.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.545 "hdgst": ${hdgst:-false}, 00:19:42.545 "ddgst": ${ddgst:-false} 00:19:42.545 }, 00:19:42.545 "method": "bdev_nvme_attach_controller" 00:19:42.545 } 00:19:42.545 EOF 00:19:42.545 )") 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.545 { 00:19:42.545 "params": { 00:19:42.545 "name": "Nvme$subsystem", 00:19:42.545 "trtype": "$TEST_TRANSPORT", 00:19:42.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.545 "adrfam": "ipv4", 00:19:42.545 "trsvcid": "$NVMF_PORT", 00:19:42.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.545 "hdgst": ${hdgst:-false}, 00:19:42.545 "ddgst": ${ddgst:-false} 00:19:42.545 }, 00:19:42.545 "method": "bdev_nvme_attach_controller" 00:19:42.545 } 00:19:42.545 EOF 00:19:42.545 )") 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.545 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.546 { 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme$subsystem", 00:19:42.546 "trtype": "$TEST_TRANSPORT", 00:19:42.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "$NVMF_PORT", 
00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.546 "hdgst": ${hdgst:-false}, 00:19:42.546 "ddgst": ${ddgst:-false} 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 } 00:19:42.546 EOF 00:19:42.546 )") 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.546 { 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme$subsystem", 00:19:42.546 "trtype": "$TEST_TRANSPORT", 00:19:42.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "$NVMF_PORT", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.546 "hdgst": ${hdgst:-false}, 00:19:42.546 "ddgst": ${ddgst:-false} 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 } 00:19:42.546 EOF 00:19:42.546 )") 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.546 { 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme$subsystem", 00:19:42.546 "trtype": "$TEST_TRANSPORT", 00:19:42.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "$NVMF_PORT", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.546 "hdgst": ${hdgst:-false}, 00:19:42.546 "ddgst": ${ddgst:-false} 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 } 00:19:42.546 EOF 00:19:42.546 )") 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.546 { 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme$subsystem", 00:19:42.546 "trtype": "$TEST_TRANSPORT", 00:19:42.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "$NVMF_PORT", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.546 "hdgst": ${hdgst:-false}, 00:19:42.546 "ddgst": ${ddgst:-false} 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 } 00:19:42.546 EOF 00:19:42.546 )") 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.546 { 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme$subsystem", 00:19:42.546 "trtype": "$TEST_TRANSPORT", 00:19:42.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "$NVMF_PORT", 00:19:42.546 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.546 "hdgst": ${hdgst:-false}, 00:19:42.546 "ddgst": ${ddgst:-false} 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 } 00:19:42.546 EOF 00:19:42.546 )") 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.546 { 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme$subsystem", 00:19:42.546 "trtype": "$TEST_TRANSPORT", 00:19:42.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "$NVMF_PORT", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.546 "hdgst": ${hdgst:-false}, 00:19:42.546 "ddgst": ${ddgst:-false} 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 } 00:19:42.546 EOF 00:19:42.546 )") 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.546 { 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme$subsystem", 00:19:42.546 "trtype": "$TEST_TRANSPORT", 00:19:42.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "$NVMF_PORT", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.546 "hdgst": ${hdgst:-false}, 00:19:42.546 "ddgst": ${ddgst:-false} 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 } 00:19:42.546 EOF 00:19:42.546 )") 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:42.546 17:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme1", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.546 "hdgst": false, 00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme2", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:42.546 "hdgst": false, 00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme3", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:42.546 "hdgst": false, 00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme4", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:42.546 "hdgst": false, 00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme5", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:42.546 "hdgst": false, 00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme6", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:42.546 "hdgst": false, 00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme7", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:42.546 "hdgst": false, 00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme8", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.546 "adrfam": "ipv4", 00:19:42.546 "trsvcid": "4420", 00:19:42.546 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:42.546 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:42.546 "hdgst": false, 
00:19:42.546 "ddgst": false 00:19:42.546 }, 00:19:42.546 "method": "bdev_nvme_attach_controller" 00:19:42.546 },{ 00:19:42.546 "params": { 00:19:42.546 "name": "Nvme9", 00:19:42.546 "trtype": "tcp", 00:19:42.546 "traddr": "10.0.0.2", 00:19:42.547 "adrfam": "ipv4", 00:19:42.547 "trsvcid": "4420", 00:19:42.547 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:42.547 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:42.547 "hdgst": false, 00:19:42.547 "ddgst": false 00:19:42.547 }, 00:19:42.547 "method": "bdev_nvme_attach_controller" 00:19:42.547 },{ 00:19:42.547 "params": { 00:19:42.547 "name": "Nvme10", 00:19:42.547 "trtype": "tcp", 00:19:42.547 "traddr": "10.0.0.2", 00:19:42.547 "adrfam": "ipv4", 00:19:42.547 "trsvcid": "4420", 00:19:42.547 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:42.547 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:42.547 "hdgst": false, 00:19:42.547 "ddgst": false 00:19:42.547 }, 00:19:42.547 "method": "bdev_nvme_attach_controller" 00:19:42.547 }' 00:19:42.547 [2024-07-15 17:42:37.502084] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:42.547 [2024-07-15 17:42:37.502182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276420 ] 00:19:42.547 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.547 [2024-07-15 17:42:37.564454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.547 [2024-07-15 17:42:37.675764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.449 Running I/O for 10 seconds... 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:44.449 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:44.708 17:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:44.965 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:44.965 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:44.965 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:44.965 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:44.965 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.965 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=135 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2276420 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2276420 ']' 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2276420 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # 
uname 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2276420 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2276420' 00:19:45.224 killing process with pid 2276420 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2276420 00:19:45.224 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2276420 00:19:45.224 Received shutdown signal, test time was about 0.954803 seconds 00:19:45.224 00:19:45.224 Latency(us) 00:19:45.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.224 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme1n1 : 0.95 269.84 16.86 0.00 0.00 233839.12 17087.91 265639.25 00:19:45.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme2n1 : 0.94 204.93 12.81 0.00 0.00 302539.16 22913.33 295154.73 00:19:45.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme3n1 : 0.95 270.70 16.92 0.00 0.00 224541.39 18738.44 254765.13 00:19:45.224 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme4n1 : 0.92 215.19 13.45 0.00 0.00 273425.27 5145.79 265639.25 00:19:45.224 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme5n1 : 0.91 210.85 13.18 0.00 0.00 275280.28 20971.52 239230.67 00:19:45.224 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme6n1 : 0.92 208.30 13.02 0.00 0.00 272903.84 21554.06 259425.47 00:19:45.224 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme7n1 : 0.90 213.69 13.36 0.00 0.00 259800.68 20097.71 245444.46 00:19:45.224 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme8n1 : 0.95 268.35 16.77 0.00 0.00 203575.75 20874.43 254765.13 00:19:45.224 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme9n1 : 0.93 206.44 12.90 0.00 0.00 258288.70 20097.71 268746.15 00:19:45.224 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:45.224 Verification LBA range: start 0x0 length 0x400 00:19:45.224 Nvme10n1 : 0.94 203.93 12.75 0.00 0.00 256192.66 24563.86 299815.06 00:19:45.224 =================================================================================================================== 00:19:45.224 Total : 2272.23 
142.01 0.00 0.00 252870.38 5145.79 299815.06 00:19:45.483 17:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2276242 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.417 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.417 rmmod nvme_tcp 00:19:46.676 rmmod nvme_fabrics 00:19:46.676 rmmod nvme_keyring 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2276242 ']' 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2276242 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2276242 ']' 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2276242 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2276242 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2276242' 00:19:46.676 killing process with pid 2276242 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2276242 00:19:46.676 17:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2276242 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- 
# '[' '' == iso ']' 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.246 17:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:49.152 00:19:49.152 real 0m7.913s 00:19:49.152 user 0m23.906s 00:19:49.152 sys 0m1.584s 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.152 ************************************ 00:19:49.152 END TEST nvmf_shutdown_tc2 00:19:49.152 ************************************ 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:49.152 ************************************ 00:19:49.152 START TEST nvmf_shutdown_tc3 00:19:49.152 ************************************ 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.152 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.410 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.410 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.411 17:42:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:49.411 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:49.411 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:49.411 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:49.411 17:42:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:49.411 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:19:49.411 00:19:49.411 --- 10.0.0.2 ping statistics --- 00:19:49.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.411 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:49.411 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:19:49.411 00:19:49.412 --- 10.0.0.1 ping statistics --- 00:19:49.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.412 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2277335 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2277335 00:19:49.412 17:42:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2277335 ']' 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.412 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.412 [2024-07-15 17:42:44.512382] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:49.412 [2024-07-15 17:42:44.512457] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.412 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.669 [2024-07-15 17:42:44.578572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.669 [2024-07-15 17:42:44.688252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.669 [2024-07-15 17:42:44.688306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.669 [2024-07-15 17:42:44.688319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.669 [2024-07-15 17:42:44.688331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.669 [2024-07-15 17:42:44.688341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
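The -m 0x1E mask passed to nvmf_tgt above selects cores 1 through 4 (0x1E is binary 11110), which is why the notices that follow report four reactors coming up on cores 1-4 while the bdevperf initiator later runs with -c 0x1 on core 0. A minimal sketch of the launch-and-wait pattern, using the namespace and paths seen in the trace; the real nvmfappstart/waitforlisten helpers in nvmf/common.sh and autotest_common.sh do considerably more bookkeeping, so treat this only as an illustration:

    # hypothetical condensed form of the target start traced above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the RPC socket until the target answers, roughly what waitforlisten does
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done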
00:19:49.669 [2024-07-15 17:42:44.688423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.669 [2024-07-15 17:42:44.688488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.669 [2024-07-15 17:42:44.688555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:49.669 [2024-07-15 17:42:44.688557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.927 [2024-07-15 17:42:44.831623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.927 17:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.927 Malloc1 00:19:49.927 [2024-07-15 17:42:44.906691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.927 Malloc2 00:19:49.927 Malloc3 00:19:49.927 Malloc4 00:19:50.185 Malloc5 00:19:50.185 Malloc6 00:19:50.185 Malloc7 00:19:50.185 Malloc8 00:19:50.185 Malloc9 00:19:50.185 Malloc10 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2277515 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2277515 /var/tmp/bdevperf.sock 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2277515 ']' 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
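bdevperf for the tc3 case receives its whole configuration through process substitution: gen_nvmf_target_json (nvmf/common.sh) emits one bdev_nvme_attach_controller entry per subsystem number, and the --json /dev/fd/63 argument is simply what <(...) expands to, so no config file is written to disk. A condensed, hypothetical sketch of that pattern follows; the real helper, whose heredoc loop is traced over the next lines, also parameterizes transport, address and digest settings through shell variables and runs the result through jq:

    # rough stand-in for gen_nvmf_target_json; values are hard-coded here
    gen_json() {
        local i entries=()
        for i in "$@"; do
            entries+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "tcp",
              "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode%s",
              "hostnqn": "nqn.2016-06.io.spdk:host%s",
              "hdgst": false, "ddgst": false },
              "method": "bdev_nvme_attach_controller" }' "$i" "$i" "$i")")
        done
        local IFS=,
        printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}"
    }

    # same shape as the traced invocation; <(...) shows up as /dev/fd/63
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10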
00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:50.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.444 { 00:19:50.444 "params": { 00:19:50.444 "name": "Nvme$subsystem", 00:19:50.444 "trtype": "$TEST_TRANSPORT", 00:19:50.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.444 "adrfam": "ipv4", 00:19:50.444 "trsvcid": "$NVMF_PORT", 00:19:50.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.444 "hdgst": ${hdgst:-false}, 00:19:50.444 "ddgst": ${ddgst:-false} 00:19:50.444 }, 00:19:50.444 "method": "bdev_nvme_attach_controller" 00:19:50.444 } 00:19:50.444 EOF 00:19:50.444 )") 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.444 { 00:19:50.444 "params": { 00:19:50.444 "name": "Nvme$subsystem", 00:19:50.444 "trtype": "$TEST_TRANSPORT", 00:19:50.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.444 "adrfam": "ipv4", 00:19:50.444 "trsvcid": "$NVMF_PORT", 00:19:50.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.444 "hdgst": ${hdgst:-false}, 00:19:50.444 "ddgst": ${ddgst:-false} 00:19:50.444 }, 00:19:50.444 "method": "bdev_nvme_attach_controller" 00:19:50.444 } 00:19:50.444 EOF 00:19:50.444 )") 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.444 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.444 { 00:19:50.444 "params": { 00:19:50.444 "name": "Nvme$subsystem", 00:19:50.444 "trtype": "$TEST_TRANSPORT", 00:19:50.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.444 "adrfam": "ipv4", 00:19:50.444 "trsvcid": "$NVMF_PORT", 00:19:50.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.445 { 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme$subsystem", 00:19:50.445 "trtype": "$TEST_TRANSPORT", 00:19:50.445 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "$NVMF_PORT", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.445 { 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme$subsystem", 00:19:50.445 "trtype": "$TEST_TRANSPORT", 00:19:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "$NVMF_PORT", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.445 { 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme$subsystem", 00:19:50.445 "trtype": "$TEST_TRANSPORT", 00:19:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "$NVMF_PORT", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.445 { 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme$subsystem", 00:19:50.445 "trtype": "$TEST_TRANSPORT", 00:19:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "$NVMF_PORT", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.445 { 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme$subsystem", 00:19:50.445 "trtype": "$TEST_TRANSPORT", 00:19:50.445 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "$NVMF_PORT", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.445 { 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme$subsystem", 00:19:50.445 "trtype": "$TEST_TRANSPORT", 00:19:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "$NVMF_PORT", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.445 { 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme$subsystem", 00:19:50.445 "trtype": "$TEST_TRANSPORT", 00:19:50.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "$NVMF_PORT", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.445 "hdgst": ${hdgst:-false}, 00:19:50.445 "ddgst": ${ddgst:-false} 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 } 00:19:50.445 EOF 00:19:50.445 )") 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:50.445 17:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme1", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "4420", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.445 "hdgst": false, 00:19:50.445 "ddgst": false 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 },{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme2", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "4420", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.445 "hdgst": false, 00:19:50.445 "ddgst": false 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 },{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme3", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "4420", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:50.445 "hdgst": false, 00:19:50.445 "ddgst": false 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 },{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme4", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "4420", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:50.445 "hdgst": false, 00:19:50.445 "ddgst": false 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 },{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme5", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "4420", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:50.445 "hdgst": false, 00:19:50.445 "ddgst": false 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 },{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme6", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "4420", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:50.445 "hdgst": false, 00:19:50.445 "ddgst": false 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 },{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme7", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.445 "adrfam": "ipv4", 00:19:50.445 "trsvcid": "4420", 00:19:50.445 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:50.445 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:50.445 "hdgst": false, 00:19:50.445 "ddgst": false 00:19:50.445 }, 00:19:50.445 "method": "bdev_nvme_attach_controller" 00:19:50.445 },{ 00:19:50.445 "params": { 00:19:50.445 "name": "Nvme8", 00:19:50.445 "trtype": "tcp", 00:19:50.445 "traddr": "10.0.0.2", 00:19:50.446 "adrfam": "ipv4", 00:19:50.446 "trsvcid": "4420", 00:19:50.446 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:50.446 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:50.446 "hdgst": false, 
00:19:50.446 "ddgst": false 00:19:50.446 }, 00:19:50.446 "method": "bdev_nvme_attach_controller" 00:19:50.446 },{ 00:19:50.446 "params": { 00:19:50.446 "name": "Nvme9", 00:19:50.446 "trtype": "tcp", 00:19:50.446 "traddr": "10.0.0.2", 00:19:50.446 "adrfam": "ipv4", 00:19:50.446 "trsvcid": "4420", 00:19:50.446 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:50.446 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:50.446 "hdgst": false, 00:19:50.446 "ddgst": false 00:19:50.446 }, 00:19:50.446 "method": "bdev_nvme_attach_controller" 00:19:50.446 },{ 00:19:50.446 "params": { 00:19:50.446 "name": "Nvme10", 00:19:50.446 "trtype": "tcp", 00:19:50.446 "traddr": "10.0.0.2", 00:19:50.446 "adrfam": "ipv4", 00:19:50.446 "trsvcid": "4420", 00:19:50.446 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:50.446 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:50.446 "hdgst": false, 00:19:50.446 "ddgst": false 00:19:50.446 }, 00:19:50.446 "method": "bdev_nvme_attach_controller" 00:19:50.446 }' 00:19:50.446 [2024-07-15 17:42:45.410957] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:50.446 [2024-07-15 17:42:45.411037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277515 ] 00:19:50.446 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.446 [2024-07-15 17:42:45.473436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.705 [2024-07-15 17:42:45.583287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.081 Running I/O for 10 seconds... 00:19:52.081 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.081 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:52.081 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:52.081 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.081 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:52.340 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:52.600 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:52.600 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:52.600 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:52.600 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.600 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.600 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:52.859 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.859 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:52.859 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:52.859 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:53.132 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:53.132 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:53.132 17:42:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2277335 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2277335 ']' 
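waitforio (target/shutdown.sh, lines 50-69 in the trace prefixes) is the polling loop behind the read_io_count values above: up to ten times, a quarter of a second apart, it queries bdevperf's RPC socket for Nvme1n1's read counter and succeeds once at least 100 reads have completed, which is why the counts 3, 67 and 131 appear here. Only then does shutdown_tc3 kill the nvmf target, pid 2277335, while bdevperf is still driving I/O, and the nvmf_tcp_qpair_set_recv_state messages that follow reflect the qpairs being torn down mid-stream. A minimal sketch of the loop, assuming rpc.py in place of the rpc_cmd wrapper used by the test:

    # hypothetical condensed form of waitforio from target/shutdown.sh
    waitforio() {
        local sock=$1 bdev=$2 i count
        for ((i = 10; i > 0; i--)); do
            count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                return 0
            fi
            sleep 0.25
        done
        return 1
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1 || exit 1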
00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2277335 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2277335 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2277335' 00:19:53.132 killing process with pid 2277335 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2277335 00:19:53.132 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2277335 00:19:53.132 [2024-07-15 17:42:48.066120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca1a0 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.066246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca1a0 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 
is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.068993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.132 [2024-07-15 17:42:48.069227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069441] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.069513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeca640 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 
00:19:53.133 [2024-07-15 17:42:48.071346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is 
same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.071942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecaae0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.072997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.073036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.073051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.133 [2024-07-15 17:42:48.073064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073249] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 
00:19:53.134 [2024-07-15 17:42:48.073522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is 
same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.073804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecafa0 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.134 [2024-07-15 17:42:48.074975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.074987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075134] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.075395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb440 is same with the state(5) to be set 
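The repeated tcp.c:1607 messages in this part of the log all come from nvmf_tcp_qpair_set_recv_state and differ only in timestamp and qpair address (0xeca1a0, 0xeca640, 0xecaae0, 0xecafa0, 0xecb440 above; 0xecb900 and 0xecc240 below): they appear to be the signalled target logging, for each TCP qpair it tears down, that the recv state being requested matches the state the qpair is already in. When triaging a console log like this, the repetition reads more easily collapsed to per-qpair counts; a small sketch, assuming the console output has been saved to build.log (a placeholder name, not a path from this job):

  # one line per qpair address with how many times the message repeated
  grep 'nvmf_tcp_qpair_set_recv_state' build.log \
      | sed -n 's/.*tqpair=\(0x[0-9a-f]*\).*/\1/p' \
      | sort | uniq -c | sort -rn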
00:19:53.135 [2024-07-15 17:42:48.076388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is 
same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.076987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.135 [2024-07-15 17:42:48.077084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.077096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.077109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.077121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.077133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.077145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.077163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.077175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb900 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.079322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc240 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.079348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc240 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.079367] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc240 is same with the state(5) to be set 00:19:53.136 [2024-07-15 17:42:48.079380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc240 is same with the state(5) to be set [identical recv state message for tqpair=0xecc240 repeated at timestamps 17:42:48.079392 through 17:42:48.080103] 00:19:53.137 [2024-07-15 17:42:48.089603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089741] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.089960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.089984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.137 [2024-07-15 17:42:48.090899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.137 [2024-07-15 17:42:48.090915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.090929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.090944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.090958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.090973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.090987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.138 [2024-07-15 17:42:48.091575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.091628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:53.138 [2024-07-15 17:42:48.092168] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2bad090 was disconnected and freed. reset controller. 00:19:53.138 [2024-07-15 17:42:48.092299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x297a450 is same with the state(5) to be set 00:19:53.138 [2024-07-15 17:42:48.092464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2450610 is same with the state(5) to be set 00:19:53.138 [2024-07-15 17:42:48.092631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a12bb0 is same with the state(5) to be set 00:19:53.138 [2024-07-15 17:42:48.092791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.092923] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b18350 is same with the state(5) to be set 00:19:53.138 [2024-07-15 17:42:48.092970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.138 [2024-07-15 17:42:48.092990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.138 [2024-07-15 17:42:48.093004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2971280 is same with the state(5) to be set 00:19:53.139 [2024-07-15 17:42:48.093134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a12990 is same with the state(5) to be set 00:19:53.139 [2024-07-15 17:42:48.093302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ea600 is same with the state(5) to be set 00:19:53.139 [2024-07-15 17:42:48.093456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b1a240 is same with the state(5) to be set 00:19:53.139 [2024-07-15 17:42:48.093616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093705] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2970c60 is same with the state(5) to be set 00:19:53.139 [2024-07-15 17:42:48.093826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.139 [2024-07-15 17:42:48.093945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.093958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294e830 is same with the state(5) to be set 00:19:53.139 [2024-07-15 17:42:48.095352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.139 [2024-07-15 17:42:48.095870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.139 [2024-07-15 17:42:48.095894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.095908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.095923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.095937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.095952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.095965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.095980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.095993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:53.140 [2024-07-15 17:42:48.096666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.096963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.096988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.097003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.097019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 [2024-07-15 17:42:48.097032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.140 [2024-07-15 17:42:48.097047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.140 
[2024-07-15 17:42:48.097061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 
17:42:48.097369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:53.141 [2024-07-15 17:42:48.097482] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2ba9390 was disconnected and freed. reset controller. 00:19:53.141 [2024-07-15 17:42:48.097617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.097973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.097987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.141 [2024-07-15 17:42:48.098480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.141 [2024-07-15 17:42:48.098493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.098985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.098999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.099532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.099546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2babbe0 is same with the state(5) to be set 00:19:53.142 [2024-07-15 17:42:48.099618] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2babbe0 was disconnected and freed. reset controller. 
00:19:53.142 [2024-07-15 17:42:48.100914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.100939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.100960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.100975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.100991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.101005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.101021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.142 [2024-07-15 17:42:48.101035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.142 [2024-07-15 17:42:48.101050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 
[2024-07-15 17:42:48.101238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101526] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.101984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.101997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.143 [2024-07-15 17:42:48.102334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.143 [2024-07-15 17:42:48.102349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.144 [2024-07-15 17:42:48.102823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.144 [2024-07-15 17:42:48.102929] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29e2d70 was disconnected and freed. reset controller. 00:19:53.144 [2024-07-15 17:42:48.105593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:53.144 [2024-07-15 17:42:48.105632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:53.144 [2024-07-15 17:42:48.105663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a12bb0 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ea600 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x297a450 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2450610 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b18350 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2971280 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a12990 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b1a240 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2970c60 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.105921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294e830 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.107802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting 
controller 00:19:53.144 [2024-07-15 17:42:48.108801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:53.144 [2024-07-15 17:42:48.109056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.144 [2024-07-15 17:42:48.109089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29ea600 with addr=10.0.0.2, port=4420 00:19:53.144 [2024-07-15 17:42:48.109108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ea600 is same with the state(5) to be set 00:19:53.144 [2024-07-15 17:42:48.109256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.144 [2024-07-15 17:42:48.109282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a12bb0 with addr=10.0.0.2, port=4420 00:19:53.144 [2024-07-15 17:42:48.109297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a12bb0 is same with the state(5) to be set 00:19:53.144 [2024-07-15 17:42:48.109436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.144 [2024-07-15 17:42:48.109462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x294e830 with addr=10.0.0.2, port=4420 00:19:53.144 [2024-07-15 17:42:48.109478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294e830 is same with the state(5) to be set 00:19:53.144 [2024-07-15 17:42:48.109815] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:53.144 [2024-07-15 17:42:48.109908] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:53.144 [2024-07-15 17:42:48.109978] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:53.144 [2024-07-15 17:42:48.110048] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:53.144 [2024-07-15 17:42:48.110115] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:53.144 [2024-07-15 17:42:48.110207] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:53.144 [2024-07-15 17:42:48.110553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.144 [2024-07-15 17:42:48.110580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2b18350 with addr=10.0.0.2, port=4420 00:19:53.144 [2024-07-15 17:42:48.110596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b18350 is same with the state(5) to be set 00:19:53.144 [2024-07-15 17:42:48.110615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ea600 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.110635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a12bb0 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.110654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294e830 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.110801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b18350 (9): Bad file descriptor 00:19:53.144 [2024-07-15 17:42:48.110828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:53.144 [2024-07-15 17:42:48.110842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:53.144 [2024-07-15 17:42:48.110858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:53.144 [2024-07-15 17:42:48.110895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:53.144 [2024-07-15 17:42:48.110911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:53.144 [2024-07-15 17:42:48.110924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:53.144 [2024-07-15 17:42:48.110941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.144 [2024-07-15 17:42:48.110954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.144 [2024-07-15 17:42:48.110966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.144 [2024-07-15 17:42:48.111034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.144 [2024-07-15 17:42:48.111055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.144 [2024-07-15 17:42:48.111066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.144 [2024-07-15 17:42:48.111078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:53.144 [2024-07-15 17:42:48.111089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:53.144 [2024-07-15 17:42:48.111102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:53.144 [2024-07-15 17:42:48.111153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:53.144 [2024-07-15 17:42:48.115784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.115825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.115858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.115881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.115900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.115914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.115930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.115943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.115959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.115973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.115988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 
17:42:48.116138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116437] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.145 [2024-07-15 17:42:48.116834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.145 [2024-07-15 17:42:48.116849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.116868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.116894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.116910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.116926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.116940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.116955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.116968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.116984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.116997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.117736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.117750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a9d410 is same with the state(5) to be set 00:19:53.146 [2024-07-15 17:42:48.119053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119236] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.146 [2024-07-15 17:42:48.119375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.146 [2024-07-15 17:42:48.119389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.119985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.119999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:53.147 [2024-07-15 17:42:48.120470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.147 [2024-07-15 17:42:48.120629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.147 [2024-07-15 17:42:48.120644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 
17:42:48.120758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.120985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.120998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.121012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a9e7e0 is same with the state(5) to be set 00:19:53.148 [2024-07-15 17:42:48.122267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.122979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.122992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.123007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.123021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.123036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.123050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.123065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.123078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.123093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.123107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.148 [2024-07-15 17:42:48.123123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.148 [2024-07-15 17:42:48.123136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.123976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.123992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.124005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.124020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.124034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.124049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.124063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.124078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.124092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.124107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.124121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.124136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.124149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.149 [2024-07-15 17:42:48.124166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.149 [2024-07-15 17:42:48.124179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.124193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2949b50 is same with the state(5) to be set 00:19:53.150 [2024-07-15 17:42:48.125437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.125971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.125985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.150 [2024-07-15 17:42:48.126646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.150 [2024-07-15 17:42:48.126661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:53.151 [2024-07-15 17:42:48.126859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.126973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.126987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 
17:42:48.127172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.127351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.127364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ba69f0 is same with the state(5) to be set 00:19:53.151 [2024-07-15 17:42:48.128595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.128982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.128996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.129025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.129054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.129083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.129121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.129149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.129181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.151 [2024-07-15 17:42:48.129209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.151 [2024-07-15 17:42:48.129224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.129972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.129985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.152 [2024-07-15 17:42:48.130458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.152 [2024-07-15 17:42:48.130473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.130486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.130501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.130515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.130530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.130543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.130557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ba7ec0 is same with the state(5) to be set 00:19:53.153 [2024-07-15 17:42:48.131821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.131843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.131886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.131904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.131920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.131933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.131948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.131962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.131977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.131991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132105] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.153 [2024-07-15 17:42:48.132771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.153 [2024-07-15 17:42:48.132786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.132799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.132814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.132827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.132843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.132856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.132872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.132892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.132908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.132923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.132939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.132952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.132968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.132985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:53.154 [2024-07-15 17:42:48.133290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 
17:42:48.133580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.154 [2024-07-15 17:42:48.133712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.154 [2024-07-15 17:42:48.133726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2baa860 is same with the state(5) to be set 00:19:53.154 [2024-07-15 17:42:48.135299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:53.154 [2024-07-15 17:42:48.135331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:53.154 [2024-07-15 17:42:48.135350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:53.154 [2024-07-15 17:42:48.135366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:53.154 [2024-07-15 17:42:48.135482] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:53.154 [2024-07-15 17:42:48.135508] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:53.154 [2024-07-15 17:42:48.135607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:53.154 task offset: 16384 on job bdev=Nvme10n1 fails
00:19:53.154
00:19:53.154 Latency(us)
00:19:53.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.154 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.154 Job: Nvme1n1 ended in about 0.94 seconds with error
00:19:53.154 Verification LBA range: start 0x0 length 0x400
00:19:53.154 Nvme1n1 : 0.94 136.28 8.52 68.14 0.00 309707.79 11699.39 324670.20
00:19:53.154 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.154 Job: Nvme2n1 ended in about 0.95 seconds with error
00:19:53.154 Verification LBA range: start 0x0 length 0x400
00:19:53.154 Nvme2n1 : 0.95 134.59 8.41 67.29 0.00 307491.78 23107.51 316902.97
00:19:53.154 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.154 Job: Nvme3n1 ended in about 0.95 seconds with error
00:19:53.154 Verification LBA range: start 0x0 length 0x400
00:19:53.154 Nvme3n1 : 0.95 134.13 8.38 67.07 0.00 302407.68 21068.61 326223.64
00:19:53.154 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.154 Job: Nvme4n1 ended in about 0.96 seconds with error
00:19:53.154 Verification LBA range: start 0x0 length 0x400
00:19:53.154 Nvme4n1 : 0.96 133.69 8.36 66.84 0.00 297354.94 23204.60 326223.64
00:19:53.154 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.154 Job: Nvme5n1 ended in about 0.96 seconds with error
00:19:53.154 Verification LBA range: start 0x0 length 0x400
00:19:53.154 Nvme5n1 : 0.96 137.41 8.59 66.62 0.00 286337.58 21748.24 310689.19
00:19:53.154 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.154 Job: Nvme6n1 ended in about 0.96 seconds with error
00:19:53.155 Verification LBA range: start 0x0 length 0x400
00:19:53.155 Nvme6n1 : 0.96 132.81 8.30 66.40 0.00 287261.08 29321.29 323116.75
00:19:53.155 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.155 Job: Nvme7n1 ended in about 0.94 seconds with error
00:19:53.155 Verification LBA range: start 0x0 length 0x400
00:19:53.155 Nvme7n1 : 0.94 136.67 8.54 68.33 0.00 272219.65 28544.57 344865.00
00:19:53.155 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.155 Job: Nvme8n1 ended in about 0.97 seconds with error
00:19:53.155 Verification LBA range: start 0x0 length 0x400
00:19:53.155 Nvme8n1 : 0.97 132.37 8.27 66.19 0.00 276409.46 22233.69 295154.73
00:19:53.155 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.155 Job: Nvme9n1 ended in about 0.94 seconds with error
00:19:53.155 Verification LBA range: start 0x0 length 0x400
00:19:53.155 Nvme9n1 : 0.94 136.50 8.53 68.25 0.00 260746.05 15534.46 326223.64
00:19:53.155 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:53.155 Job: Nvme10n1 ended in about 0.93 seconds with error
00:19:53.155 Verification LBA range: start 0x0 length 0x400
00:19:53.155 Nvme10n1 : 0.93 137.20 8.58 68.60 0.00 253283.49 28932.93 343311.55
00:19:53.155 ===================================================================================================================
00:19:53.155 Total : 1351.66 84.48 673.75 0.00 285324.06 11699.39 344865.00
00:19:53.155 [2024-07-15 17:42:48.162085] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:19:53.155 [2024-07-15 17:42:48.162169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:53.155 [2024-07-15 17:42:48.162560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.162597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2b1a240 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.162619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b1a240 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.162764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.162790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x297a450 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.162806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x297a450 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.162943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.162969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2970c60 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.162985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2970c60 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.163152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.163176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2971280 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.163192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2971280 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.164923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.155 [2024-07-15 17:42:48.164953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:53.155 [2024-07-15 17:42:48.164970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:53.155 [2024-07-15 17:42:48.164987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:53.155 [2024-07-15 17:42:48.165180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.165209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2450610 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.165225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2450610 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.165376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.165401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a12990 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.165416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a12990 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.165441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2b1a240 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.165464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x297a450 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.165495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2970c60 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.165514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2971280 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.165570] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:53.155 [2024-07-15 17:42:48.165592] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:53.155 [2024-07-15 17:42:48.165614] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:53.155 [2024-07-15 17:42:48.165634] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:53.155 [2024-07-15 17:42:48.165846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.165874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x294e830 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.165898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294e830 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.166025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.166051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a12bb0 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.166066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a12bb0 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.166195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.166220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29ea600 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.166235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ea600 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.166380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.155 [2024-07-15 17:42:48.166406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2b18350 with addr=10.0.0.2, port=4420 00:19:53.155 [2024-07-15 17:42:48.166422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b18350 is same with the state(5) to be set 00:19:53.155 [2024-07-15 17:42:48.166440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2450610 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.166459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a12990 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.166475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.166488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller 
reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.166503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:53.155 [2024-07-15 17:42:48.166522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.166536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.166548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:53.155 [2024-07-15 17:42:48.166565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.166578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.166591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:53.155 [2024-07-15 17:42:48.166606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.166624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.166637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:53.155 [2024-07-15 17:42:48.166733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.155 [2024-07-15 17:42:48.166755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.155 [2024-07-15 17:42:48.166767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.155 [2024-07-15 17:42:48.166778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.155 [2024-07-15 17:42:48.166793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294e830 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.166812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a12bb0 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.166829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ea600 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.166845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b18350 (9): Bad file descriptor 00:19:53.155 [2024-07-15 17:42:48.166860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.166873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.166895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:19:53.155 [2024-07-15 17:42:48.166912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.166925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.166937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:53.155 [2024-07-15 17:42:48.166977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.155 [2024-07-15 17:42:48.166995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.155 [2024-07-15 17:42:48.167006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.167018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.167030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.155 [2024-07-15 17:42:48.167046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.167060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.167072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:53.155 [2024-07-15 17:42:48.167087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:53.155 [2024-07-15 17:42:48.167100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:53.155 [2024-07-15 17:42:48.167112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:53.156 [2024-07-15 17:42:48.167126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:53.156 [2024-07-15 17:42:48.167139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:53.156 [2024-07-15 17:42:48.167151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:53.156 [2024-07-15 17:42:48.167193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.156 [2024-07-15 17:42:48.167211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.156 [2024-07-15 17:42:48.167222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.156 [2024-07-15 17:42:48.167233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:53.724 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:53.724 17:42:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2277515 00:19:54.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2277515) - No such process 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.669 rmmod nvme_tcp 00:19:54.669 rmmod nvme_fabrics 00:19:54.669 rmmod nvme_keyring 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.669 17:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.602 17:42:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.602 00:19:56.602 real 0m7.448s 00:19:56.602 user 0m18.120s 00:19:56.602 sys 0m1.373s 00:19:56.602 
17:42:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.602 17:42:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.602 ************************************ 00:19:56.602 END TEST nvmf_shutdown_tc3 00:19:56.602 ************************************ 00:19:56.861 17:42:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:56.861 17:42:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:56.861 00:19:56.861 real 0m28.063s 00:19:56.861 user 1m19.499s 00:19:56.861 sys 0m6.324s 00:19:56.861 17:42:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.861 17:42:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:56.861 ************************************ 00:19:56.861 END TEST nvmf_shutdown 00:19:56.861 ************************************ 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:56.861 17:42:51 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.861 17:42:51 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.861 17:42:51 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:56.861 17:42:51 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.861 17:42:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.861 ************************************ 00:19:56.861 START TEST nvmf_multicontroller 00:19:56.861 ************************************ 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:56.861 * Looking for test storage... 
00:19:56.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:56.861 17:42:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.861 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.862 17:42:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.777 17:42:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:58.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:58.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.777 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:58.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:58.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.778 17:42:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:19:58.778 00:19:58.778 --- 10.0.0.2 ping statistics --- 00:19:58.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.778 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:19:58.778 00:19:58.778 --- 10.0.0.1 ping statistics --- 00:19:58.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.778 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2279917 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2279917 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2279917 ']' 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.778 17:42:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.037 [2024-07-15 17:42:53.948331] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:19:59.037 [2024-07-15 17:42:53.948413] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.037 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.038 [2024-07-15 17:42:54.012243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:59.038 [2024-07-15 17:42:54.134861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.038 [2024-07-15 17:42:54.134921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.038 [2024-07-15 17:42:54.134952] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.038 [2024-07-15 17:42:54.134963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.038 [2024-07-15 17:42:54.134973] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
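For readers reconstructing the trace above: the nvmf_tcp_init plumbing and the target launch reduce to roughly the following commands. This is a minimal sketch using the interface names, addresses, and paths shown in this run (cvl_0_0/cvl_0_1 on the e810 ports, SPDK built under build/bin); it is not a literal replay of the harness functions.

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in and sanity-check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace; the harness then polls
    # /var/tmp/spdk.sock (waitforlisten) before issuing any RPCs
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &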
00:19:59.038 [2024-07-15 17:42:54.135031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.038 [2024-07-15 17:42:54.135100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.038 [2024-07-15 17:42:54.135103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 [2024-07-15 17:42:54.935647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 Malloc0 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 [2024-07-15 17:42:54.995028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 
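The rpc_cmd calls traced above provision the first subsystem. Expressed with the standalone scripts/rpc.py client (an assumption for illustration; the harness actually talks to /var/tmp/spdk.sock through its own rpc_cmd wrapper), the sequence is roughly:

    # transport, backing bdev, subsystem, namespace, listener (values as logged)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the trace continues below with a second listener on port 4421 and the
    # same steps repeated for Malloc1 / nqn.2016-06.io.spdk:cnode2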
17:42:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 [2024-07-15 17:42:55.002842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 Malloc1 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2280072 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2280072 /var/tmp/bdevperf.sock 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2280072 ']' 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.973 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.540 NVMe0n1 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.540 1 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.540 request: 00:20:00.540 { 00:20:00.540 "name": "NVMe0", 00:20:00.540 "trtype": "tcp", 00:20:00.540 "traddr": "10.0.0.2", 00:20:00.540 "adrfam": "ipv4", 00:20:00.540 "trsvcid": "4420", 00:20:00.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.540 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:00.540 "hostaddr": "10.0.0.2", 00:20:00.540 "hostsvcid": "60000", 00:20:00.540 "prchk_reftag": false, 00:20:00.540 "prchk_guard": false, 00:20:00.540 "hdgst": false, 00:20:00.540 "ddgst": false, 00:20:00.540 "method": "bdev_nvme_attach_controller", 00:20:00.540 "req_id": 1 00:20:00.540 } 00:20:00.540 Got JSON-RPC error response 00:20:00.540 response: 00:20:00.540 { 00:20:00.540 "code": -114, 00:20:00.540 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:00.540 } 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.540 request: 00:20:00.540 { 00:20:00.540 "name": "NVMe0", 00:20:00.540 "trtype": "tcp", 00:20:00.540 "traddr": "10.0.0.2", 00:20:00.540 "adrfam": "ipv4", 00:20:00.540 "trsvcid": "4420", 00:20:00.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.540 "hostaddr": "10.0.0.2", 00:20:00.540 "hostsvcid": "60000", 00:20:00.540 "prchk_reftag": false, 00:20:00.540 "prchk_guard": false, 00:20:00.540 
"hdgst": false, 00:20:00.540 "ddgst": false, 00:20:00.540 "method": "bdev_nvme_attach_controller", 00:20:00.540 "req_id": 1 00:20:00.540 } 00:20:00.540 Got JSON-RPC error response 00:20:00.540 response: 00:20:00.540 { 00:20:00.540 "code": -114, 00:20:00.540 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:00.540 } 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.540 request: 00:20:00.540 { 00:20:00.540 "name": "NVMe0", 00:20:00.540 "trtype": "tcp", 00:20:00.540 "traddr": "10.0.0.2", 00:20:00.540 "adrfam": "ipv4", 00:20:00.540 "trsvcid": "4420", 00:20:00.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.540 "hostaddr": "10.0.0.2", 00:20:00.540 "hostsvcid": "60000", 00:20:00.540 "prchk_reftag": false, 00:20:00.540 "prchk_guard": false, 00:20:00.540 "hdgst": false, 00:20:00.540 "ddgst": false, 00:20:00.540 "multipath": "disable", 00:20:00.540 "method": "bdev_nvme_attach_controller", 00:20:00.540 "req_id": 1 00:20:00.540 } 00:20:00.540 Got JSON-RPC error response 00:20:00.540 response: 00:20:00.540 { 00:20:00.540 "code": -114, 00:20:00.540 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:00.540 } 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.540 17:42:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.540 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.540 request: 00:20:00.540 { 00:20:00.540 "name": "NVMe0", 00:20:00.540 "trtype": "tcp", 00:20:00.540 "traddr": "10.0.0.2", 00:20:00.540 "adrfam": "ipv4", 00:20:00.540 "trsvcid": "4420", 00:20:00.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.540 "hostaddr": "10.0.0.2", 00:20:00.540 "hostsvcid": "60000", 00:20:00.540 "prchk_reftag": false, 00:20:00.540 "prchk_guard": false, 00:20:00.540 "hdgst": false, 00:20:00.540 "ddgst": false, 00:20:00.540 "multipath": "failover", 00:20:00.540 "method": "bdev_nvme_attach_controller", 00:20:00.540 "req_id": 1 00:20:00.540 } 00:20:00.540 Got JSON-RPC error response 00:20:00.540 response: 00:20:00.540 { 00:20:00.540 "code": -114, 00:20:00.540 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:00.540 } 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.541 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.800 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.800 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:00.800 17:42:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.180 0 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2280072 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2280072 ']' 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2280072 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2280072 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2280072' 00:20:02.180 killing process with pid 2280072 00:20:02.180 17:42:56 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2280072 00:20:02.180 17:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2280072 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:02.180 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:02.180 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:02.180 [2024-07-15 17:42:55.102691] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:20:02.180 [2024-07-15 17:42:55.102778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280072 ] 00:20:02.180 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.180 [2024-07-15 17:42:55.165425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.180 [2024-07-15 17:42:55.275468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.180 [2024-07-15 17:42:55.772144] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 94fbdab3-1618-4fe8-8c66-eae0f198b0b7 already exists 00:20:02.180 [2024-07-15 17:42:55.772198] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:94fbdab3-1618-4fe8-8c66-eae0f198b0b7 alias for bdev NVMe1n1 00:20:02.180 [2024-07-15 17:42:55.772213] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:02.180 Running I/O for 1 seconds... 
00:20:02.180 00:20:02.180 Latency(us) 00:20:02.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.180 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:02.180 NVMe0n1 : 1.00 19132.95 74.74 0.00 0.00 6680.31 2560.76 11553.75 00:20:02.181 =================================================================================================================== 00:20:02.181 Total : 19132.95 74.74 0.00 0.00 6680.31 2560.76 11553.75 00:20:02.181 Received shutdown signal, test time was about 1.000000 seconds 00:20:02.181 00:20:02.181 Latency(us) 00:20:02.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.181 =================================================================================================================== 00:20:02.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.181 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.181 rmmod nvme_tcp 00:20:02.181 rmmod nvme_fabrics 00:20:02.181 rmmod nvme_keyring 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2279917 ']' 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2279917 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2279917 ']' 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2279917 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2279917 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2279917' 00:20:02.181 killing process with pid 2279917 00:20:02.181 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2279917 00:20:02.181 17:42:57 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2279917 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.753 17:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.657 17:42:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.657 00:20:04.657 real 0m7.838s 00:20:04.657 user 0m13.367s 00:20:04.657 sys 0m2.159s 00:20:04.657 17:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.657 17:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:04.657 ************************************ 00:20:04.657 END TEST nvmf_multicontroller 00:20:04.657 ************************************ 00:20:04.657 17:42:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:04.657 17:42:59 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:04.657 17:42:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:04.657 17:42:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.657 17:42:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.657 ************************************ 00:20:04.657 START TEST nvmf_aer 00:20:04.657 ************************************ 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:04.657 * Looking for test storage... 
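Before nvmftestinit rebuilds the same topology for the aer test below, the multicontroller run above is torn down. A sketch of the net effect (the _remove_spdk_ns helper is SPDK-internal, so the ip netns delete line is an assumption about what it does in this run):

    # unload the kernel initiator modules, stop the target, drop the namespace
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                      # pid 2279917 in this run
    ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1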
00:20:04.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.657 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.658 17:42:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:07.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:07.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:07.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:07.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.191 
17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:07.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:20:07.191 00:20:07.191 --- 10.0.0.2 ping statistics --- 00:20:07.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.191 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:20:07.191 00:20:07.191 --- 10.0.0.1 ping statistics --- 00:20:07.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.191 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:07.191 17:43:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2282330 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2282330 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2282330 ']' 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.191 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.191 [2024-07-15 17:43:02.061980] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:20:07.191 [2024-07-15 17:43:02.062066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.191 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.191 [2024-07-15 17:43:02.142307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.191 [2024-07-15 17:43:02.275617] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.192 [2024-07-15 17:43:02.275677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:07.192 [2024-07-15 17:43:02.275715] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.192 [2024-07-15 17:43:02.275734] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.192 [2024-07-15 17:43:02.275752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.192 [2024-07-15 17:43:02.275886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.192 [2024-07-15 17:43:02.275955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.192 [2024-07-15 17:43:02.276031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.192 [2024-07-15 17:43:02.276020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 [2024-07-15 17:43:02.427505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 Malloc0 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 [2024-07-15 17:43:02.478654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 [ 00:20:07.450 { 00:20:07.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:07.450 "subtype": "Discovery", 00:20:07.450 "listen_addresses": [], 00:20:07.450 "allow_any_host": true, 00:20:07.450 "hosts": [] 00:20:07.450 }, 00:20:07.450 { 00:20:07.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.450 "subtype": "NVMe", 00:20:07.450 "listen_addresses": [ 00:20:07.450 { 00:20:07.450 "trtype": "TCP", 00:20:07.450 "adrfam": "IPv4", 00:20:07.450 "traddr": "10.0.0.2", 00:20:07.450 "trsvcid": "4420" 00:20:07.450 } 00:20:07.450 ], 00:20:07.450 "allow_any_host": true, 00:20:07.450 "hosts": [], 00:20:07.450 "serial_number": "SPDK00000000000001", 00:20:07.450 "model_number": "SPDK bdev Controller", 00:20:07.450 "max_namespaces": 2, 00:20:07.450 "min_cntlid": 1, 00:20:07.450 "max_cntlid": 65519, 00:20:07.450 "namespaces": [ 00:20:07.450 { 00:20:07.450 "nsid": 1, 00:20:07.450 "bdev_name": "Malloc0", 00:20:07.450 "name": "Malloc0", 00:20:07.450 "nguid": "606112A48BF44D688D766E2B2B2D5199", 00:20:07.450 "uuid": "606112a4-8bf4-4d68-8d76-6e2b2b2d5199" 00:20:07.450 } 00:20:07.450 ] 00:20:07.450 } 00:20:07.450 ] 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2282422 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:07.450 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:07.451 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:07.451 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.709 Malloc1 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.709 [ 00:20:07.709 { 00:20:07.709 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:07.709 "subtype": "Discovery", 00:20:07.709 "listen_addresses": [], 00:20:07.709 "allow_any_host": true, 00:20:07.709 "hosts": [] 00:20:07.709 }, 00:20:07.709 { 00:20:07.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.709 "subtype": "NVMe", 00:20:07.709 "listen_addresses": [ 00:20:07.709 { 00:20:07.709 "trtype": "TCP", 00:20:07.709 "adrfam": "IPv4", 00:20:07.709 "traddr": "10.0.0.2", 00:20:07.709 "trsvcid": "4420" 00:20:07.709 } 00:20:07.709 ], 00:20:07.709 "allow_any_host": true, 00:20:07.709 "hosts": [], 00:20:07.709 "serial_number": "SPDK00000000000001", 00:20:07.709 "model_number": "SPDK bdev Controller", 00:20:07.709 "max_namespaces": 2, 00:20:07.709 "min_cntlid": 1, 00:20:07.709 "max_cntlid": 65519, 00:20:07.709 "namespaces": [ 00:20:07.709 { 00:20:07.709 "nsid": 1, 00:20:07.709 "bdev_name": "Malloc0", 00:20:07.709 "name": "Malloc0", 00:20:07.709 "nguid": "606112A48BF44D688D766E2B2B2D5199", 00:20:07.709 "uuid": "606112a4-8bf4-4d68-8d76-6e2b2b2d5199" 00:20:07.709 }, 00:20:07.709 { 00:20:07.709 "nsid": 2, 00:20:07.709 "bdev_name": "Malloc1", 00:20:07.709 "name": "Malloc1", 00:20:07.709 "nguid": "258BB7A120B14BE8A7C3083F8AB03E8B", 00:20:07.709 "uuid": "258bb7a1-20b1-4be8-a7c3-083f8ab03e8b" 00:20:07.709 } 00:20:07.709 ] 00:20:07.709 } 00:20:07.709 ] 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2282422 00:20:07.709 Asynchronous Event Request test 00:20:07.709 Attaching to 10.0.0.2 00:20:07.709 Attached to 10.0.0.2 00:20:07.709 Registering asynchronous event callbacks... 00:20:07.709 Starting namespace attribute notice tests for all controllers... 00:20:07.709 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:07.709 aer_cb - Changed Namespace 00:20:07.709 Cleaning up... 
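Note on the pass above: host/aer.sh configures the target over the RPC socket, starts the aer helper against 10.0.0.2:4420 from the initiator side, and then hot-adds a second namespace, which is what produces the "Changed Namespace" AER callback logged just before "Cleaning up...". A minimal sketch of the same RPC sequence, assuming it is run from the SPDK repo root against the default /var/tmp/spdk.sock; every flag and size below is copied from the trace above, nothing here is a new interface:

# transport, backing bdev, and a subsystem capped at 2 namespaces
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# start the AER listener, then hot-add a second namespace to trigger the event
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2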
00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.709 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.968 rmmod nvme_tcp 00:20:07.968 rmmod nvme_fabrics 00:20:07.968 rmmod nvme_keyring 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2282330 ']' 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2282330 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2282330 ']' 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2282330 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2282330 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2282330' 00:20:07.968 killing process with pid 2282330 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2282330 00:20:07.968 17:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2282330 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.247 17:43:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.156 17:43:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.156 00:20:10.156 real 0m5.537s 00:20:10.156 user 0m4.365s 00:20:10.156 sys 0m1.991s 00:20:10.156 17:43:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.156 17:43:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:10.156 ************************************ 00:20:10.156 END TEST nvmf_aer 00:20:10.156 ************************************ 00:20:10.156 17:43:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.156 17:43:05 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:10.156 17:43:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:10.156 17:43:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.156 17:43:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.414 ************************************ 00:20:10.414 START TEST nvmf_async_init 00:20:10.414 ************************************ 00:20:10.414 17:43:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:10.414 * Looking for test storage... 
00:20:10.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:10.414 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.414 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:10.414 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=69a09c555df54dc9a39acd990e546429 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.415 17:43:05 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.415 17:43:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:12.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:12.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:12.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:12.381 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:20:12.381 00:20:12.381 --- 10.0.0.2 ping statistics --- 00:20:12.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.381 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:12.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:12.381 00:20:12.381 --- 10.0.0.1 ping statistics --- 00:20:12.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.381 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:12.381 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2284362 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2284362 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2284362 ']' 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.382 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.641 [2024-07-15 17:43:07.527479] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
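Note on the startup above: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for the process to start listening on /var/tmp/spdk.sock (the "Waiting for process to start up..." message in the trace). A rough manual equivalent, assuming it is run from the SPDK repo root with the binary path and flags taken from the trace:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# poll the RPC socket rather than sleeping a fixed interval
until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done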
00:20:12.641 [2024-07-15 17:43:07.527551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.641 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.641 [2024-07-15 17:43:07.589557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.641 [2024-07-15 17:43:07.703381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.641 [2024-07-15 17:43:07.703448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.641 [2024-07-15 17:43:07.703461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.641 [2024-07-15 17:43:07.703488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.641 [2024-07-15 17:43:07.703498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.641 [2024-07-15 17:43:07.703523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.901 [2024-07-15 17:43:07.850033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.901 null0 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.901 17:43:07 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 69a09c555df54dc9a39acd990e546429 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.901 [2024-07-15 17:43:07.890301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.901 17:43:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.162 nvme0n1 00:20:13.162 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.162 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.162 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.162 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.162 [ 00:20:13.162 { 00:20:13.162 "name": "nvme0n1", 00:20:13.162 "aliases": [ 00:20:13.162 "69a09c55-5df5-4dc9-a39a-cd990e546429" 00:20:13.162 ], 00:20:13.162 "product_name": "NVMe disk", 00:20:13.162 "block_size": 512, 00:20:13.162 "num_blocks": 2097152, 00:20:13.162 "uuid": "69a09c55-5df5-4dc9-a39a-cd990e546429", 00:20:13.162 "assigned_rate_limits": { 00:20:13.162 "rw_ios_per_sec": 0, 00:20:13.162 "rw_mbytes_per_sec": 0, 00:20:13.162 "r_mbytes_per_sec": 0, 00:20:13.162 "w_mbytes_per_sec": 0 00:20:13.162 }, 00:20:13.162 "claimed": false, 00:20:13.162 "zoned": false, 00:20:13.162 "supported_io_types": { 00:20:13.162 "read": true, 00:20:13.162 "write": true, 00:20:13.162 "unmap": false, 00:20:13.162 "flush": true, 00:20:13.162 "reset": true, 00:20:13.162 "nvme_admin": true, 00:20:13.162 "nvme_io": true, 00:20:13.162 "nvme_io_md": false, 00:20:13.162 "write_zeroes": true, 00:20:13.162 "zcopy": false, 00:20:13.162 "get_zone_info": false, 00:20:13.163 "zone_management": false, 00:20:13.163 "zone_append": false, 00:20:13.163 "compare": true, 00:20:13.163 "compare_and_write": true, 00:20:13.163 "abort": true, 00:20:13.163 "seek_hole": false, 00:20:13.163 "seek_data": false, 00:20:13.163 "copy": true, 00:20:13.163 "nvme_iov_md": false 00:20:13.163 }, 00:20:13.163 "memory_domains": [ 00:20:13.163 { 00:20:13.163 "dma_device_id": "system", 00:20:13.163 "dma_device_type": 1 00:20:13.163 } 00:20:13.163 ], 00:20:13.163 "driver_specific": { 00:20:13.163 "nvme": [ 00:20:13.163 { 00:20:13.163 "trid": { 00:20:13.163 "trtype": "TCP", 00:20:13.163 "adrfam": "IPv4", 00:20:13.163 "traddr": "10.0.0.2", 
00:20:13.163 "trsvcid": "4420", 00:20:13.163 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:13.163 }, 00:20:13.163 "ctrlr_data": { 00:20:13.163 "cntlid": 1, 00:20:13.163 "vendor_id": "0x8086", 00:20:13.163 "model_number": "SPDK bdev Controller", 00:20:13.163 "serial_number": "00000000000000000000", 00:20:13.163 "firmware_revision": "24.09", 00:20:13.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.163 "oacs": { 00:20:13.163 "security": 0, 00:20:13.163 "format": 0, 00:20:13.163 "firmware": 0, 00:20:13.163 "ns_manage": 0 00:20:13.163 }, 00:20:13.163 "multi_ctrlr": true, 00:20:13.163 "ana_reporting": false 00:20:13.163 }, 00:20:13.163 "vs": { 00:20:13.163 "nvme_version": "1.3" 00:20:13.163 }, 00:20:13.163 "ns_data": { 00:20:13.163 "id": 1, 00:20:13.163 "can_share": true 00:20:13.163 } 00:20:13.163 } 00:20:13.163 ], 00:20:13.163 "mp_policy": "active_passive" 00:20:13.163 } 00:20:13.163 } 00:20:13.163 ] 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.163 [2024-07-15 17:43:08.143784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:13.163 [2024-07-15 17:43:08.143891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaf090 (9): Bad file descriptor 00:20:13.163 [2024-07-15 17:43:08.286047] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.163 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.163 [ 00:20:13.163 { 00:20:13.163 "name": "nvme0n1", 00:20:13.163 "aliases": [ 00:20:13.163 "69a09c55-5df5-4dc9-a39a-cd990e546429" 00:20:13.163 ], 00:20:13.163 "product_name": "NVMe disk", 00:20:13.163 "block_size": 512, 00:20:13.163 "num_blocks": 2097152, 00:20:13.163 "uuid": "69a09c55-5df5-4dc9-a39a-cd990e546429", 00:20:13.163 "assigned_rate_limits": { 00:20:13.163 "rw_ios_per_sec": 0, 00:20:13.163 "rw_mbytes_per_sec": 0, 00:20:13.163 "r_mbytes_per_sec": 0, 00:20:13.163 "w_mbytes_per_sec": 0 00:20:13.163 }, 00:20:13.163 "claimed": false, 00:20:13.422 "zoned": false, 00:20:13.422 "supported_io_types": { 00:20:13.422 "read": true, 00:20:13.422 "write": true, 00:20:13.422 "unmap": false, 00:20:13.422 "flush": true, 00:20:13.422 "reset": true, 00:20:13.422 "nvme_admin": true, 00:20:13.422 "nvme_io": true, 00:20:13.422 "nvme_io_md": false, 00:20:13.422 "write_zeroes": true, 00:20:13.422 "zcopy": false, 00:20:13.422 "get_zone_info": false, 00:20:13.422 "zone_management": false, 00:20:13.422 "zone_append": false, 00:20:13.422 "compare": true, 00:20:13.422 "compare_and_write": true, 00:20:13.422 "abort": true, 00:20:13.422 "seek_hole": false, 00:20:13.423 "seek_data": false, 00:20:13.423 "copy": true, 00:20:13.423 "nvme_iov_md": false 00:20:13.423 }, 00:20:13.423 "memory_domains": [ 00:20:13.423 { 00:20:13.423 "dma_device_id": "system", 00:20:13.423 "dma_device_type": 1 
00:20:13.423 } 00:20:13.423 ], 00:20:13.423 "driver_specific": { 00:20:13.423 "nvme": [ 00:20:13.423 { 00:20:13.423 "trid": { 00:20:13.423 "trtype": "TCP", 00:20:13.423 "adrfam": "IPv4", 00:20:13.423 "traddr": "10.0.0.2", 00:20:13.423 "trsvcid": "4420", 00:20:13.423 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:13.423 }, 00:20:13.423 "ctrlr_data": { 00:20:13.423 "cntlid": 2, 00:20:13.423 "vendor_id": "0x8086", 00:20:13.423 "model_number": "SPDK bdev Controller", 00:20:13.423 "serial_number": "00000000000000000000", 00:20:13.423 "firmware_revision": "24.09", 00:20:13.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.423 "oacs": { 00:20:13.423 "security": 0, 00:20:13.423 "format": 0, 00:20:13.423 "firmware": 0, 00:20:13.423 "ns_manage": 0 00:20:13.423 }, 00:20:13.423 "multi_ctrlr": true, 00:20:13.423 "ana_reporting": false 00:20:13.423 }, 00:20:13.423 "vs": { 00:20:13.423 "nvme_version": "1.3" 00:20:13.423 }, 00:20:13.423 "ns_data": { 00:20:13.423 "id": 1, 00:20:13.423 "can_share": true 00:20:13.423 } 00:20:13.423 } 00:20:13.423 ], 00:20:13.423 "mp_policy": "active_passive" 00:20:13.423 } 00:20:13.423 } 00:20:13.423 ] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YfGZWJvyb1 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YfGZWJvyb1 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.423 [2024-07-15 17:43:08.336483] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.423 [2024-07-15 17:43:08.336670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YfGZWJvyb1 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.423 [2024-07-15 17:43:08.344497] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YfGZWJvyb1 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.423 [2024-07-15 17:43:08.352527] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.423 [2024-07-15 17:43:08.352600] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:13.423 nvme0n1 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.423 [ 00:20:13.423 { 00:20:13.423 "name": "nvme0n1", 00:20:13.423 "aliases": [ 00:20:13.423 "69a09c55-5df5-4dc9-a39a-cd990e546429" 00:20:13.423 ], 00:20:13.423 "product_name": "NVMe disk", 00:20:13.423 "block_size": 512, 00:20:13.423 "num_blocks": 2097152, 00:20:13.423 "uuid": "69a09c55-5df5-4dc9-a39a-cd990e546429", 00:20:13.423 "assigned_rate_limits": { 00:20:13.423 "rw_ios_per_sec": 0, 00:20:13.423 "rw_mbytes_per_sec": 0, 00:20:13.423 "r_mbytes_per_sec": 0, 00:20:13.423 "w_mbytes_per_sec": 0 00:20:13.423 }, 00:20:13.423 "claimed": false, 00:20:13.423 "zoned": false, 00:20:13.423 "supported_io_types": { 00:20:13.423 "read": true, 00:20:13.423 "write": true, 00:20:13.423 "unmap": false, 00:20:13.423 "flush": true, 00:20:13.423 "reset": true, 00:20:13.423 "nvme_admin": true, 00:20:13.423 "nvme_io": true, 00:20:13.423 "nvme_io_md": false, 00:20:13.423 "write_zeroes": true, 00:20:13.423 "zcopy": false, 00:20:13.423 "get_zone_info": false, 00:20:13.423 "zone_management": false, 00:20:13.423 "zone_append": false, 00:20:13.423 "compare": true, 00:20:13.423 "compare_and_write": true, 00:20:13.423 "abort": true, 00:20:13.423 "seek_hole": false, 00:20:13.423 "seek_data": false, 00:20:13.423 "copy": true, 00:20:13.423 "nvme_iov_md": false 00:20:13.423 }, 00:20:13.423 "memory_domains": [ 00:20:13.423 { 00:20:13.423 "dma_device_id": "system", 00:20:13.423 "dma_device_type": 1 00:20:13.423 } 00:20:13.423 ], 00:20:13.423 "driver_specific": { 00:20:13.423 "nvme": [ 00:20:13.423 { 00:20:13.423 "trid": { 00:20:13.423 "trtype": "TCP", 00:20:13.423 "adrfam": "IPv4", 00:20:13.423 "traddr": "10.0.0.2", 00:20:13.423 "trsvcid": "4421", 00:20:13.423 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:13.423 }, 00:20:13.423 "ctrlr_data": { 00:20:13.423 "cntlid": 3, 00:20:13.423 "vendor_id": "0x8086", 00:20:13.423 "model_number": "SPDK bdev Controller", 00:20:13.423 "serial_number": "00000000000000000000", 00:20:13.423 "firmware_revision": "24.09", 00:20:13.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:13.423 "oacs": { 00:20:13.423 "security": 0, 00:20:13.423 "format": 0, 00:20:13.423 "firmware": 0, 00:20:13.423 "ns_manage": 0 00:20:13.423 }, 00:20:13.423 "multi_ctrlr": true, 00:20:13.423 "ana_reporting": false 00:20:13.423 }, 00:20:13.423 "vs": { 00:20:13.423 "nvme_version": "1.3" 00:20:13.423 }, 00:20:13.423 "ns_data": { 00:20:13.423 "id": 1, 00:20:13.423 "can_share": true 00:20:13.423 } 00:20:13.423 } 00:20:13.423 ], 00:20:13.423 "mp_policy": "active_passive" 00:20:13.423 } 00:20:13.423 } 00:20:13.423 ] 00:20:13.423 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.YfGZWJvyb1 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.424 rmmod nvme_tcp 00:20:13.424 rmmod nvme_fabrics 00:20:13.424 rmmod nvme_keyring 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2284362 ']' 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2284362 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2284362 ']' 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2284362 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2284362 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2284362' 00:20:13.424 killing process with pid 2284362 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2284362 00:20:13.424 [2024-07-15 17:43:08.544127] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:13.424 [2024-07-15 17:43:08.544163] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:13.424 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2284362 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.682 17:43:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.220 17:43:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:16.220 00:20:16.220 real 0m5.551s 00:20:16.220 user 0m2.108s 00:20:16.220 sys 0m1.824s 00:20:16.220 17:43:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:16.220 17:43:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:16.220 ************************************ 00:20:16.220 END TEST nvmf_async_init 00:20:16.220 ************************************ 00:20:16.220 17:43:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:16.220 17:43:10 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:16.220 17:43:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:16.220 17:43:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.220 17:43:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.220 ************************************ 00:20:16.220 START TEST dma 00:20:16.220 ************************************ 00:20:16.220 17:43:10 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:16.220 * Looking for test storage... 
00:20:16.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:16.220 17:43:10 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.220 17:43:10 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.220 17:43:10 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.220 17:43:10 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.220 17:43:10 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.220 17:43:10 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.220 17:43:10 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.220 17:43:10 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:16.220 17:43:10 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.220 17:43:10 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.220 17:43:10 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:16.220 17:43:10 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:16.220 00:20:16.220 real 0m0.072s 00:20:16.220 user 0m0.032s 00:20:16.220 sys 0m0.046s 00:20:16.220 17:43:10 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:16.220 17:43:10 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:16.220 ************************************ 00:20:16.220 END TEST dma 00:20:16.220 ************************************ 00:20:16.220 17:43:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:16.220 17:43:10 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:16.221 17:43:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:16.221 17:43:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.221 17:43:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.221 ************************************ 00:20:16.221 START TEST nvmf_identify 00:20:16.221 ************************************ 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:16.221 * Looking for test storage... 
00:20:16.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.221 17:43:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:18.124 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:18.124 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:18.124 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:18.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.124 17:43:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:20:18.124 00:20:18.124 --- 10.0.0.2 ping statistics --- 00:20:18.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.124 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:20:18.124 00:20:18.124 --- 10.0.0.1 ping statistics --- 00:20:18.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.124 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2286485 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2286485 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2286485 ']' 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.124 17:43:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:18.124 [2024-07-15 17:43:13.125007] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:20:18.124 [2024-07-15 17:43:13.125099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.124 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.124 [2024-07-15 17:43:13.198541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.382 [2024-07-15 17:43:13.319445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
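Condensed, the nvmf_tcp_init sequence traced above moves one port of the e810 pair (cvl_0_0) into a private network namespace for the target at 10.0.0.2, leaves its peer (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, and then checks the path in both directions. A minimal re-creation of that setup, assuming root privileges and the interface names found in this particular run, is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator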
00:20:18.382 [2024-07-15 17:43:13.319497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.382 [2024-07-15 17:43:13.319514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.382 [2024-07-15 17:43:13.319532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.382 [2024-07-15 17:43:13.319544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.382 [2024-07-15 17:43:13.319599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.382 [2024-07-15 17:43:13.319655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.382 [2024-07-15 17:43:13.319773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.382 [2024-07-15 17:43:13.319776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 [2024-07-15 17:43:14.117796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 Malloc0 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 [2024-07-15 17:43:14.195755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.322 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.322 [ 00:20:19.322 { 00:20:19.322 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:19.322 "subtype": "Discovery", 00:20:19.322 "listen_addresses": [ 00:20:19.322 { 00:20:19.322 "trtype": "TCP", 00:20:19.322 "adrfam": "IPv4", 00:20:19.322 "traddr": "10.0.0.2", 00:20:19.322 "trsvcid": "4420" 00:20:19.322 } 00:20:19.322 ], 00:20:19.322 "allow_any_host": true, 00:20:19.322 "hosts": [] 00:20:19.322 }, 00:20:19.323 { 00:20:19.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.323 "subtype": "NVMe", 00:20:19.323 "listen_addresses": [ 00:20:19.323 { 00:20:19.323 "trtype": "TCP", 00:20:19.323 "adrfam": "IPv4", 00:20:19.323 "traddr": "10.0.0.2", 00:20:19.323 "trsvcid": "4420" 00:20:19.323 } 00:20:19.323 ], 00:20:19.323 "allow_any_host": true, 00:20:19.323 "hosts": [], 00:20:19.323 "serial_number": "SPDK00000000000001", 00:20:19.323 "model_number": "SPDK bdev Controller", 00:20:19.323 "max_namespaces": 32, 00:20:19.323 "min_cntlid": 1, 00:20:19.323 "max_cntlid": 65519, 00:20:19.323 "namespaces": [ 00:20:19.323 { 00:20:19.323 "nsid": 1, 00:20:19.323 "bdev_name": "Malloc0", 00:20:19.323 "name": "Malloc0", 00:20:19.323 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:19.323 "eui64": "ABCDEF0123456789", 00:20:19.323 "uuid": "c08594b4-2cca-4e02-8e9a-5968c6d42de6" 00:20:19.323 } 00:20:19.323 ] 00:20:19.323 } 00:20:19.323 ] 00:20:19.323 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.323 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:19.323 [2024-07-15 17:43:14.238077] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
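Taken together, the rpc_cmd calls above configure the target that the identify test then queries: a TCP transport, a 64 MiB malloc-backed namespace, and one NVMe subsystem plus the discovery subsystem listening on 10.0.0.2:4420. An equivalent standalone configuration, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (assumptions; this run drives the same RPCs through the rpc_cmd helper), would be:

  rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options used by this run
  rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # expose the discovery subsystem as well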
00:20:19.323 [2024-07-15 17:43:14.238122] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286640 ] 00:20:19.323 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.323 [2024-07-15 17:43:14.273071] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:19.323 [2024-07-15 17:43:14.273138] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:19.323 [2024-07-15 17:43:14.273148] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:19.323 [2024-07-15 17:43:14.273179] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:19.323 [2024-07-15 17:43:14.273191] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:19.323 [2024-07-15 17:43:14.273561] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:19.323 [2024-07-15 17:43:14.273625] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15b1540 0 00:20:19.323 [2024-07-15 17:43:14.279892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:19.323 [2024-07-15 17:43:14.279930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:19.323 [2024-07-15 17:43:14.279940] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:19.323 [2024-07-15 17:43:14.279946] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:19.323 [2024-07-15 17:43:14.280006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.280022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.280031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.323 [2024-07-15 17:43:14.280051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:19.323 [2024-07-15 17:43:14.280077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.323 [2024-07-15 17:43:14.287894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.323 [2024-07-15 17:43:14.287913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.323 [2024-07-15 17:43:14.287921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.287929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.323 [2024-07-15 17:43:14.287946] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:19.323 [2024-07-15 17:43:14.287961] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:19.323 [2024-07-15 17:43:14.287970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:19.323 [2024-07-15 17:43:14.287998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288007] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.323 [2024-07-15 17:43:14.288025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.323 [2024-07-15 17:43:14.288048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.323 [2024-07-15 17:43:14.288258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.323 [2024-07-15 17:43:14.288274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.323 [2024-07-15 17:43:14.288280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.323 [2024-07-15 17:43:14.288297] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:19.323 [2024-07-15 17:43:14.288311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:19.323 [2024-07-15 17:43:14.288326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.323 [2024-07-15 17:43:14.288366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.323 [2024-07-15 17:43:14.288388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.323 [2024-07-15 17:43:14.288539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.323 [2024-07-15 17:43:14.288556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.323 [2024-07-15 17:43:14.288562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.323 [2024-07-15 17:43:14.288578] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:19.323 [2024-07-15 17:43:14.288594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:19.323 [2024-07-15 17:43:14.288608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.323 [2024-07-15 17:43:14.288633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.323 [2024-07-15 17:43:14.288654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.323 [2024-07-15 17:43:14.288791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.323 
[2024-07-15 17:43:14.288807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.323 [2024-07-15 17:43:14.288814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.323 [2024-07-15 17:43:14.288831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:19.323 [2024-07-15 17:43:14.288849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.288867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.323 [2024-07-15 17:43:14.288889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.323 [2024-07-15 17:43:14.288914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.323 [2024-07-15 17:43:14.289074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.323 [2024-07-15 17:43:14.289090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.323 [2024-07-15 17:43:14.289097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.289104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.323 [2024-07-15 17:43:14.289113] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:19.323 [2024-07-15 17:43:14.289122] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:19.323 [2024-07-15 17:43:14.289136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:19.323 [2024-07-15 17:43:14.289250] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:19.323 [2024-07-15 17:43:14.289259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:19.323 [2024-07-15 17:43:14.289273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.289296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.289302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.323 [2024-07-15 17:43:14.289312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.323 [2024-07-15 17:43:14.289333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.323 [2024-07-15 17:43:14.289505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.323 [2024-07-15 17:43:14.289522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.323 [2024-07-15 17:43:14.289528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.289535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.323 [2024-07-15 17:43:14.289543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:19.323 [2024-07-15 17:43:14.289562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.289572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.289579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.323 [2024-07-15 17:43:14.289590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.323 [2024-07-15 17:43:14.289611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.323 [2024-07-15 17:43:14.289759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.323 [2024-07-15 17:43:14.289775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.323 [2024-07-15 17:43:14.289781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.323 [2024-07-15 17:43:14.289788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.323 [2024-07-15 17:43:14.289797] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:19.324 [2024-07-15 17:43:14.289805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:19.324 [2024-07-15 17:43:14.289825] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:19.324 [2024-07-15 17:43:14.289848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:19.324 [2024-07-15 17:43:14.289867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.289882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.289894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.324 [2024-07-15 17:43:14.289916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.324 [2024-07-15 17:43:14.290098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.324 [2024-07-15 17:43:14.290118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.324 [2024-07-15 17:43:14.290130] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.290142] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b1540): datao=0, datal=4096, cccid=0 00:20:19.324 [2024-07-15 17:43:14.290154] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16113c0) on tqpair(0x15b1540): expected_datao=0, payload_size=4096 00:20:19.324 [2024-07-15 17:43:14.290167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.290192] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.290204] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.324 [2024-07-15 17:43:14.331118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.324 [2024-07-15 17:43:14.331126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.324 [2024-07-15 17:43:14.331147] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:19.324 [2024-07-15 17:43:14.331162] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:19.324 [2024-07-15 17:43:14.331171] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:19.324 [2024-07-15 17:43:14.331181] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:19.324 [2024-07-15 17:43:14.331190] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:19.324 [2024-07-15 17:43:14.331198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:19.324 [2024-07-15 17:43:14.331215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:19.324 [2024-07-15 17:43:14.331232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.331259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.324 [2024-07-15 17:43:14.331283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.324 [2024-07-15 17:43:14.331457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.324 [2024-07-15 17:43:14.331477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.324 [2024-07-15 17:43:14.331485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.324 [2024-07-15 17:43:14.331511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.331535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.324 [2024-07-15 17:43:14.331546] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.331583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.324 [2024-07-15 17:43:14.331593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.331614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.324 [2024-07-15 17:43:14.331623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.331659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.324 [2024-07-15 17:43:14.331667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:19.324 [2024-07-15 17:43:14.331688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:19.324 [2024-07-15 17:43:14.331703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.331710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.331720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.324 [2024-07-15 17:43:14.331742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16113c0, cid 0, qid 0 00:20:19.324 [2024-07-15 17:43:14.331767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611540, cid 1, qid 0 00:20:19.324 [2024-07-15 17:43:14.331776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16116c0, cid 2, qid 0 00:20:19.324 [2024-07-15 17:43:14.331784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.324 [2024-07-15 17:43:14.331792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16119c0, cid 4, qid 0 00:20:19.324 [2024-07-15 17:43:14.332042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.324 [2024-07-15 17:43:14.332059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.324 [2024-07-15 17:43:14.332066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16119c0) on tqpair=0x15b1540 00:20:19.324 [2024-07-15 17:43:14.332082] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:19.324 [2024-07-15 17:43:14.332092] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:19.324 [2024-07-15 17:43:14.332117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.332141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.324 [2024-07-15 17:43:14.332178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16119c0, cid 4, qid 0 00:20:19.324 [2024-07-15 17:43:14.332415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.324 [2024-07-15 17:43:14.332432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.324 [2024-07-15 17:43:14.332439] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332445] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b1540): datao=0, datal=4096, cccid=4 00:20:19.324 [2024-07-15 17:43:14.332453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16119c0) on tqpair(0x15b1540): expected_datao=0, payload_size=4096 00:20:19.324 [2024-07-15 17:43:14.332461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332472] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332484] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.324 [2024-07-15 17:43:14.332573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.324 [2024-07-15 17:43:14.332580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16119c0) on tqpair=0x15b1540 00:20:19.324 [2024-07-15 17:43:14.332607] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:19.324 [2024-07-15 17:43:14.332651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.332674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.324 [2024-07-15 17:43:14.332701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.332715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b1540) 00:20:19.324 [2024-07-15 17:43:14.332724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.324 [2024-07-15 17:43:14.332764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x16119c0, cid 4, qid 0 00:20:19.324 [2024-07-15 17:43:14.332776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611b40, cid 5, qid 0 00:20:19.324 [2024-07-15 17:43:14.336887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.324 [2024-07-15 17:43:14.336904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.324 [2024-07-15 17:43:14.336911] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.336918] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b1540): datao=0, datal=1024, cccid=4 00:20:19.324 [2024-07-15 17:43:14.336925] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16119c0) on tqpair(0x15b1540): expected_datao=0, payload_size=1024 00:20:19.324 [2024-07-15 17:43:14.336933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.336942] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.336949] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.336958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.324 [2024-07-15 17:43:14.336971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.324 [2024-07-15 17:43:14.336978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.324 [2024-07-15 17:43:14.336984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611b40) on tqpair=0x15b1540 00:20:19.324 [2024-07-15 17:43:14.376017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.324 [2024-07-15 17:43:14.376036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.324 [2024-07-15 17:43:14.376044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16119c0) on tqpair=0x15b1540 00:20:19.325 [2024-07-15 17:43:14.376070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b1540) 00:20:19.325 [2024-07-15 17:43:14.376090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.325 [2024-07-15 17:43:14.376122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16119c0, cid 4, qid 0 00:20:19.325 [2024-07-15 17:43:14.376303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.325 [2024-07-15 17:43:14.376331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.325 [2024-07-15 17:43:14.376344] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376353] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b1540): datao=0, datal=3072, cccid=4 00:20:19.325 [2024-07-15 17:43:14.376365] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16119c0) on tqpair(0x15b1540): expected_datao=0, payload_size=3072 00:20:19.325 [2024-07-15 17:43:14.376376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376407] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376421] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.325 [2024-07-15 17:43:14.376584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.325 [2024-07-15 17:43:14.376594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16119c0) on tqpair=0x15b1540 00:20:19.325 [2024-07-15 17:43:14.376617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b1540) 00:20:19.325 [2024-07-15 17:43:14.376637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.325 [2024-07-15 17:43:14.376668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16119c0, cid 4, qid 0 00:20:19.325 [2024-07-15 17:43:14.376818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.325 [2024-07-15 17:43:14.376835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.325 [2024-07-15 17:43:14.376842] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b1540): datao=0, datal=8, cccid=4 00:20:19.325 [2024-07-15 17:43:14.376855] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16119c0) on tqpair(0x15b1540): expected_datao=0, payload_size=8 00:20:19.325 [2024-07-15 17:43:14.376863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376872] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.376888] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.417087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.325 [2024-07-15 17:43:14.417107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.325 [2024-07-15 17:43:14.417119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.325 [2024-07-15 17:43:14.417127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16119c0) on tqpair=0x15b1540 00:20:19.325 ===================================================== 00:20:19.325 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:19.325 ===================================================== 00:20:19.325 Controller Capabilities/Features 00:20:19.325 ================================ 00:20:19.325 Vendor ID: 0000 00:20:19.325 Subsystem Vendor ID: 0000 00:20:19.325 Serial Number: .................... 00:20:19.325 Model Number: ........................................ 
00:20:19.325 Firmware Version: 24.09 00:20:19.325 Recommended Arb Burst: 0 00:20:19.325 IEEE OUI Identifier: 00 00 00 00:20:19.325 Multi-path I/O 00:20:19.325 May have multiple subsystem ports: No 00:20:19.325 May have multiple controllers: No 00:20:19.325 Associated with SR-IOV VF: No 00:20:19.325 Max Data Transfer Size: 131072 00:20:19.325 Max Number of Namespaces: 0 00:20:19.325 Max Number of I/O Queues: 1024 00:20:19.325 NVMe Specification Version (VS): 1.3 00:20:19.325 NVMe Specification Version (Identify): 1.3 00:20:19.325 Maximum Queue Entries: 128 00:20:19.325 Contiguous Queues Required: Yes 00:20:19.325 Arbitration Mechanisms Supported 00:20:19.325 Weighted Round Robin: Not Supported 00:20:19.325 Vendor Specific: Not Supported 00:20:19.325 Reset Timeout: 15000 ms 00:20:19.325 Doorbell Stride: 4 bytes 00:20:19.325 NVM Subsystem Reset: Not Supported 00:20:19.325 Command Sets Supported 00:20:19.325 NVM Command Set: Supported 00:20:19.325 Boot Partition: Not Supported 00:20:19.325 Memory Page Size Minimum: 4096 bytes 00:20:19.325 Memory Page Size Maximum: 4096 bytes 00:20:19.325 Persistent Memory Region: Not Supported 00:20:19.325 Optional Asynchronous Events Supported 00:20:19.325 Namespace Attribute Notices: Not Supported 00:20:19.325 Firmware Activation Notices: Not Supported 00:20:19.325 ANA Change Notices: Not Supported 00:20:19.325 PLE Aggregate Log Change Notices: Not Supported 00:20:19.325 LBA Status Info Alert Notices: Not Supported 00:20:19.325 EGE Aggregate Log Change Notices: Not Supported 00:20:19.325 Normal NVM Subsystem Shutdown event: Not Supported 00:20:19.325 Zone Descriptor Change Notices: Not Supported 00:20:19.325 Discovery Log Change Notices: Supported 00:20:19.325 Controller Attributes 00:20:19.325 128-bit Host Identifier: Not Supported 00:20:19.325 Non-Operational Permissive Mode: Not Supported 00:20:19.325 NVM Sets: Not Supported 00:20:19.325 Read Recovery Levels: Not Supported 00:20:19.325 Endurance Groups: Not Supported 00:20:19.325 Predictable Latency Mode: Not Supported 00:20:19.325 Traffic Based Keep ALive: Not Supported 00:20:19.325 Namespace Granularity: Not Supported 00:20:19.325 SQ Associations: Not Supported 00:20:19.325 UUID List: Not Supported 00:20:19.325 Multi-Domain Subsystem: Not Supported 00:20:19.325 Fixed Capacity Management: Not Supported 00:20:19.325 Variable Capacity Management: Not Supported 00:20:19.325 Delete Endurance Group: Not Supported 00:20:19.325 Delete NVM Set: Not Supported 00:20:19.325 Extended LBA Formats Supported: Not Supported 00:20:19.325 Flexible Data Placement Supported: Not Supported 00:20:19.325 00:20:19.325 Controller Memory Buffer Support 00:20:19.325 ================================ 00:20:19.325 Supported: No 00:20:19.325 00:20:19.325 Persistent Memory Region Support 00:20:19.325 ================================ 00:20:19.325 Supported: No 00:20:19.325 00:20:19.325 Admin Command Set Attributes 00:20:19.325 ============================ 00:20:19.325 Security Send/Receive: Not Supported 00:20:19.325 Format NVM: Not Supported 00:20:19.325 Firmware Activate/Download: Not Supported 00:20:19.325 Namespace Management: Not Supported 00:20:19.325 Device Self-Test: Not Supported 00:20:19.325 Directives: Not Supported 00:20:19.325 NVMe-MI: Not Supported 00:20:19.325 Virtualization Management: Not Supported 00:20:19.325 Doorbell Buffer Config: Not Supported 00:20:19.325 Get LBA Status Capability: Not Supported 00:20:19.325 Command & Feature Lockdown Capability: Not Supported 00:20:19.325 Abort Command Limit: 1 00:20:19.325 Async 
Event Request Limit: 4 00:20:19.325 Number of Firmware Slots: N/A 00:20:19.325 Firmware Slot 1 Read-Only: N/A 00:20:19.325 Firmware Activation Without Reset: N/A 00:20:19.325 Multiple Update Detection Support: N/A 00:20:19.325 Firmware Update Granularity: No Information Provided 00:20:19.325 Per-Namespace SMART Log: No 00:20:19.325 Asymmetric Namespace Access Log Page: Not Supported 00:20:19.325 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:19.325 Command Effects Log Page: Not Supported 00:20:19.325 Get Log Page Extended Data: Supported 00:20:19.325 Telemetry Log Pages: Not Supported 00:20:19.325 Persistent Event Log Pages: Not Supported 00:20:19.325 Supported Log Pages Log Page: May Support 00:20:19.325 Commands Supported & Effects Log Page: Not Supported 00:20:19.325 Feature Identifiers & Effects Log Page:May Support 00:20:19.325 NVMe-MI Commands & Effects Log Page: May Support 00:20:19.325 Data Area 4 for Telemetry Log: Not Supported 00:20:19.325 Error Log Page Entries Supported: 128 00:20:19.325 Keep Alive: Not Supported 00:20:19.325 00:20:19.325 NVM Command Set Attributes 00:20:19.325 ========================== 00:20:19.325 Submission Queue Entry Size 00:20:19.325 Max: 1 00:20:19.325 Min: 1 00:20:19.325 Completion Queue Entry Size 00:20:19.325 Max: 1 00:20:19.325 Min: 1 00:20:19.325 Number of Namespaces: 0 00:20:19.325 Compare Command: Not Supported 00:20:19.325 Write Uncorrectable Command: Not Supported 00:20:19.325 Dataset Management Command: Not Supported 00:20:19.325 Write Zeroes Command: Not Supported 00:20:19.325 Set Features Save Field: Not Supported 00:20:19.325 Reservations: Not Supported 00:20:19.325 Timestamp: Not Supported 00:20:19.325 Copy: Not Supported 00:20:19.325 Volatile Write Cache: Not Present 00:20:19.325 Atomic Write Unit (Normal): 1 00:20:19.325 Atomic Write Unit (PFail): 1 00:20:19.325 Atomic Compare & Write Unit: 1 00:20:19.325 Fused Compare & Write: Supported 00:20:19.325 Scatter-Gather List 00:20:19.325 SGL Command Set: Supported 00:20:19.325 SGL Keyed: Supported 00:20:19.325 SGL Bit Bucket Descriptor: Not Supported 00:20:19.325 SGL Metadata Pointer: Not Supported 00:20:19.325 Oversized SGL: Not Supported 00:20:19.325 SGL Metadata Address: Not Supported 00:20:19.325 SGL Offset: Supported 00:20:19.325 Transport SGL Data Block: Not Supported 00:20:19.325 Replay Protected Memory Block: Not Supported 00:20:19.325 00:20:19.325 Firmware Slot Information 00:20:19.325 ========================= 00:20:19.325 Active slot: 0 00:20:19.325 00:20:19.325 00:20:19.325 Error Log 00:20:19.326 ========= 00:20:19.326 00:20:19.326 Active Namespaces 00:20:19.326 ================= 00:20:19.326 Discovery Log Page 00:20:19.326 ================== 00:20:19.326 Generation Counter: 2 00:20:19.326 Number of Records: 2 00:20:19.326 Record Format: 0 00:20:19.326 00:20:19.326 Discovery Log Entry 0 00:20:19.326 ---------------------- 00:20:19.326 Transport Type: 3 (TCP) 00:20:19.326 Address Family: 1 (IPv4) 00:20:19.326 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:19.326 Entry Flags: 00:20:19.326 Duplicate Returned Information: 1 00:20:19.326 Explicit Persistent Connection Support for Discovery: 1 00:20:19.326 Transport Requirements: 00:20:19.326 Secure Channel: Not Required 00:20:19.326 Port ID: 0 (0x0000) 00:20:19.326 Controller ID: 65535 (0xffff) 00:20:19.326 Admin Max SQ Size: 128 00:20:19.326 Transport Service Identifier: 4420 00:20:19.326 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:19.326 Transport Address: 10.0.0.2 00:20:19.326 
Discovery Log Entry 1 00:20:19.326 ---------------------- 00:20:19.326 Transport Type: 3 (TCP) 00:20:19.326 Address Family: 1 (IPv4) 00:20:19.326 Subsystem Type: 2 (NVM Subsystem) 00:20:19.326 Entry Flags: 00:20:19.326 Duplicate Returned Information: 0 00:20:19.326 Explicit Persistent Connection Support for Discovery: 0 00:20:19.326 Transport Requirements: 00:20:19.326 Secure Channel: Not Required 00:20:19.326 Port ID: 0 (0x0000) 00:20:19.326 Controller ID: 65535 (0xffff) 00:20:19.326 Admin Max SQ Size: 128 00:20:19.326 Transport Service Identifier: 4420 00:20:19.326 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:19.326 Transport Address: 10.0.0.2 [2024-07-15 17:43:14.417258] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:19.326 [2024-07-15 17:43:14.417283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16113c0) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.417298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.326 [2024-07-15 17:43:14.417308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611540) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.417315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.326 [2024-07-15 17:43:14.417324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16116c0) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.417331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.326 [2024-07-15 17:43:14.417339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.417347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.326 [2024-07-15 17:43:14.417379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.417388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.417394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 17:43:14.417405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.417430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.417700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.417718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.326 [2024-07-15 17:43:14.417726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.417733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.417746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.417754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.417761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 
17:43:14.417771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.417815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.418023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.418040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.326 [2024-07-15 17:43:14.418047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.418063] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:19.326 [2024-07-15 17:43:14.418073] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:19.326 [2024-07-15 17:43:14.418091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 17:43:14.418123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.418146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.418282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.418298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.326 [2024-07-15 17:43:14.418305] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.418331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 17:43:14.418360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.418381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.418512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.418527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.326 [2024-07-15 17:43:14.418534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.418560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418577] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 17:43:14.418587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.418610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.418782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.418798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.326 [2024-07-15 17:43:14.418805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.418831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.418848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 17:43:14.418859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.418900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.419103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.419119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.326 [2024-07-15 17:43:14.419126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.419133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.419151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.419162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.419169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 17:43:14.419179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.419205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.419342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.419358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.326 [2024-07-15 17:43:14.419365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.419372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.326 [2024-07-15 17:43:14.419392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.419402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.326 [2024-07-15 17:43:14.419409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.326 [2024-07-15 17:43:14.419419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.326 [2024-07-15 17:43:14.419444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.326 [2024-07-15 17:43:14.419615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.326 [2024-07-15 17:43:14.419631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.327 [2024-07-15 17:43:14.419638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.327 [2024-07-15 17:43:14.419644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.327 [2024-07-15 17:43:14.419662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.327 [2024-07-15 17:43:14.419673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.327 [2024-07-15 17:43:14.419679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.327 [2024-07-15 17:43:14.419690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.327 [2024-07-15 17:43:14.419725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.327 [2024-07-15 17:43:14.423906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.327 [2024-07-15 17:43:14.423923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.327 [2024-07-15 17:43:14.423930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.327 [2024-07-15 17:43:14.423937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.327 [2024-07-15 17:43:14.423956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.327 [2024-07-15 17:43:14.423967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.327 [2024-07-15 17:43:14.423973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b1540) 00:20:19.327 [2024-07-15 17:43:14.423984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.327 [2024-07-15 17:43:14.424006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611840, cid 3, qid 0 00:20:19.327 [2024-07-15 17:43:14.424209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.327 [2024-07-15 17:43:14.424225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.327 [2024-07-15 17:43:14.424232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.327 [2024-07-15 17:43:14.424238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611840) on tqpair=0x15b1540 00:20:19.327 [2024-07-15 17:43:14.424252] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:19.327 00:20:19.327 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:19.591 [2024-07-15 17:43:14.459684] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:20:19.591 [2024-07-15 17:43:14.459730] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286656 ] 00:20:19.591 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.591 [2024-07-15 17:43:14.494608] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:19.591 [2024-07-15 17:43:14.494661] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:19.591 [2024-07-15 17:43:14.494671] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:19.591 [2024-07-15 17:43:14.494684] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:19.591 [2024-07-15 17:43:14.494704] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:19.591 [2024-07-15 17:43:14.494931] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:19.591 [2024-07-15 17:43:14.494974] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9e8540 0 00:20:19.591 [2024-07-15 17:43:14.505894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:19.591 [2024-07-15 17:43:14.505920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:19.591 [2024-07-15 17:43:14.505927] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:19.591 [2024-07-15 17:43:14.505933] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:19.591 [2024-07-15 17:43:14.505986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.505998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.506005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.591 [2024-07-15 17:43:14.506019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:19.591 [2024-07-15 17:43:14.506045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.591 [2024-07-15 17:43:14.513935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.591 [2024-07-15 17:43:14.513953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.591 [2024-07-15 17:43:14.513960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.513967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.591 [2024-07-15 17:43:14.513999] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:19.591 [2024-07-15 17:43:14.514011] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:19.591 [2024-07-15 17:43:14.514021] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:19.591 [2024-07-15 17:43:14.514038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.591 
[2024-07-15 17:43:14.514053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.591 [2024-07-15 17:43:14.514065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.591 [2024-07-15 17:43:14.514088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.591 [2024-07-15 17:43:14.514245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.591 [2024-07-15 17:43:14.514261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.591 [2024-07-15 17:43:14.514268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.591 [2024-07-15 17:43:14.514288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:19.591 [2024-07-15 17:43:14.514302] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:19.591 [2024-07-15 17:43:14.514314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.591 [2024-07-15 17:43:14.514338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.591 [2024-07-15 17:43:14.514360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.591 [2024-07-15 17:43:14.514491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.591 [2024-07-15 17:43:14.514506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.591 [2024-07-15 17:43:14.514512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.591 [2024-07-15 17:43:14.514527] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:19.591 [2024-07-15 17:43:14.514541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:19.591 [2024-07-15 17:43:14.514553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.591 [2024-07-15 17:43:14.514578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.591 [2024-07-15 17:43:14.514598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.591 [2024-07-15 17:43:14.514725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.591 [2024-07-15 17:43:14.514740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.591 
[2024-07-15 17:43:14.514746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.591 [2024-07-15 17:43:14.514761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:19.591 [2024-07-15 17:43:14.514778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.591 [2024-07-15 17:43:14.514794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.591 [2024-07-15 17:43:14.514804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.591 [2024-07-15 17:43:14.514825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.591 [2024-07-15 17:43:14.514981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.591 [2024-07-15 17:43:14.514996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.592 [2024-07-15 17:43:14.515003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.592 [2024-07-15 17:43:14.515017] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:19.592 [2024-07-15 17:43:14.515030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:19.592 [2024-07-15 17:43:14.515043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:19.592 [2024-07-15 17:43:14.515153] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:19.592 [2024-07-15 17:43:14.515160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:19.592 [2024-07-15 17:43:14.515172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515179] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.515210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.592 [2024-07-15 17:43:14.515232] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.592 [2024-07-15 17:43:14.515376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.592 [2024-07-15 17:43:14.515392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.592 [2024-07-15 17:43:14.515399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.592 [2024-07-15 
17:43:14.515414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:19.592 [2024-07-15 17:43:14.515430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.515456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.592 [2024-07-15 17:43:14.515477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.592 [2024-07-15 17:43:14.515606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.592 [2024-07-15 17:43:14.515621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.592 [2024-07-15 17:43:14.515627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.592 [2024-07-15 17:43:14.515642] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:19.592 [2024-07-15 17:43:14.515650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:19.592 [2024-07-15 17:43:14.515664] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:19.592 [2024-07-15 17:43:14.515681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:19.592 [2024-07-15 17:43:14.515696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.515714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.592 [2024-07-15 17:43:14.515734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.592 [2024-07-15 17:43:14.515921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.592 [2024-07-15 17:43:14.515939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.592 [2024-07-15 17:43:14.515946] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515953] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=4096, cccid=0 00:20:19.592 [2024-07-15 17:43:14.515960] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa483c0) on tqpair(0x9e8540): expected_datao=0, payload_size=4096 00:20:19.592 [2024-07-15 17:43:14.515968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515978] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.515986] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.592 
[2024-07-15 17:43:14.516010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.592 [2024-07-15 17:43:14.516020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.592 [2024-07-15 17:43:14.516027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.592 [2024-07-15 17:43:14.516044] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:19.592 [2024-07-15 17:43:14.516057] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:19.592 [2024-07-15 17:43:14.516065] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:19.592 [2024-07-15 17:43:14.516072] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:19.592 [2024-07-15 17:43:14.516080] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:19.592 [2024-07-15 17:43:14.516087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:19.592 [2024-07-15 17:43:14.516102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:19.592 [2024-07-15 17:43:14.516113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.516137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.592 [2024-07-15 17:43:14.516159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.592 [2024-07-15 17:43:14.516286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.592 [2024-07-15 17:43:14.516298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.592 [2024-07-15 17:43:14.516304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.592 [2024-07-15 17:43:14.516321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.516344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.592 [2024-07-15 17:43:14.516354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9e8540) 
00:20:19.592 [2024-07-15 17:43:14.516376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.592 [2024-07-15 17:43:14.516389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.516412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.592 [2024-07-15 17:43:14.516421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.516443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.592 [2024-07-15 17:43:14.516451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:19.592 [2024-07-15 17:43:14.516485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:19.592 [2024-07-15 17:43:14.516498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.592 [2024-07-15 17:43:14.516505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e8540) 00:20:19.592 [2024-07-15 17:43:14.516515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.592 [2024-07-15 17:43:14.516535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa483c0, cid 0, qid 0 00:20:19.592 [2024-07-15 17:43:14.516562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48540, cid 1, qid 0 00:20:19.592 [2024-07-15 17:43:14.516570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa486c0, cid 2, qid 0 00:20:19.592 [2024-07-15 17:43:14.516577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48840, cid 3, qid 0 00:20:19.592 [2024-07-15 17:43:14.516585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa489c0, cid 4, qid 0 00:20:19.592 [2024-07-15 17:43:14.516748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.593 [2024-07-15 17:43:14.516763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.593 [2024-07-15 17:43:14.516770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.516777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa489c0) on tqpair=0x9e8540 00:20:19.593 [2024-07-15 17:43:14.516785] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:19.593 [2024-07-15 17:43:14.516794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.516808] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.516819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.516830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.516837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.516857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e8540) 00:20:19.593 [2024-07-15 17:43:14.516868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:19.593 [2024-07-15 17:43:14.516898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa489c0, cid 4, qid 0 00:20:19.593 [2024-07-15 17:43:14.517046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.593 [2024-07-15 17:43:14.517065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.593 [2024-07-15 17:43:14.517073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa489c0) on tqpair=0x9e8540 00:20:19.593 [2024-07-15 17:43:14.517144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.517163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.517178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e8540) 00:20:19.593 [2024-07-15 17:43:14.517196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.593 [2024-07-15 17:43:14.517231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa489c0, cid 4, qid 0 00:20:19.593 [2024-07-15 17:43:14.517385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.593 [2024-07-15 17:43:14.517398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.593 [2024-07-15 17:43:14.517404] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517411] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=4096, cccid=4 00:20:19.593 [2024-07-15 17:43:14.517419] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa489c0) on tqpair(0x9e8540): expected_datao=0, payload_size=4096 00:20:19.593 [2024-07-15 17:43:14.517426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517450] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517459] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.593 [2024-07-15 17:43:14.517574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:19.593 [2024-07-15 17:43:14.517580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa489c0) on tqpair=0x9e8540 00:20:19.593 [2024-07-15 17:43:14.517604] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:19.593 [2024-07-15 17:43:14.517622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.517640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.517659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e8540) 00:20:19.593 [2024-07-15 17:43:14.517677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.593 [2024-07-15 17:43:14.517698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa489c0, cid 4, qid 0 00:20:19.593 [2024-07-15 17:43:14.517849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.593 [2024-07-15 17:43:14.517861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.593 [2024-07-15 17:43:14.517868] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.517874] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=4096, cccid=4 00:20:19.593 [2024-07-15 17:43:14.521896] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa489c0) on tqpair(0x9e8540): expected_datao=0, payload_size=4096 00:20:19.593 [2024-07-15 17:43:14.521905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.521927] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.521936] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.521947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.593 [2024-07-15 17:43:14.521957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.593 [2024-07-15 17:43:14.521963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.521969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa489c0) on tqpair=0x9e8540 00:20:19.593 [2024-07-15 17:43:14.521991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e8540) 00:20:19.593 [2024-07-15 17:43:14.522058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.593 [2024-07-15 17:43:14.522080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa489c0, cid 4, qid 0 00:20:19.593 [2024-07-15 17:43:14.522234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.593 [2024-07-15 17:43:14.522246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.593 [2024-07-15 17:43:14.522253] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522259] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=4096, cccid=4 00:20:19.593 [2024-07-15 17:43:14.522267] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa489c0) on tqpair(0x9e8540): expected_datao=0, payload_size=4096 00:20:19.593 [2024-07-15 17:43:14.522274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522284] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522291] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.593 [2024-07-15 17:43:14.522337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.593 [2024-07-15 17:43:14.522344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa489c0) on tqpair=0x9e8540 00:20:19.593 [2024-07-15 17:43:14.522365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522433] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:19.593 [2024-07-15 17:43:14.522441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:19.593 [2024-07-15 17:43:14.522455] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:19.593 [2024-07-15 17:43:14.522475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e8540) 00:20:19.593 [2024-07-15 17:43:14.522494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.593 [2024-07-15 17:43:14.522505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.593 [2024-07-15 17:43:14.522519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e8540) 00:20:19.593 [2024-07-15 17:43:14.522543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.593 [2024-07-15 17:43:14.522567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa489c0, cid 4, qid 0 00:20:19.593 [2024-07-15 17:43:14.522579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48b40, cid 5, qid 0 00:20:19.594 [2024-07-15 17:43:14.522752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.522764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.522771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.522778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa489c0) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.522788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.522797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.522803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.522809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48b40) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.522824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.522833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e8540) 00:20:19.594 [2024-07-15 17:43:14.522843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.594 [2024-07-15 17:43:14.522863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48b40, cid 5, qid 0 00:20:19.594 [2024-07-15 17:43:14.523016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.523030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.523036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48b40) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.523059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e8540) 00:20:19.594 [2024-07-15 17:43:14.523078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.594 [2024-07-15 17:43:14.523098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48b40, cid 5, qid 0 00:20:19.594 [2024-07-15 17:43:14.523242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.523254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.523261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48b40) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.523284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e8540) 00:20:19.594 [2024-07-15 17:43:14.523306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.594 [2024-07-15 17:43:14.523327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48b40, cid 5, qid 0 00:20:19.594 [2024-07-15 17:43:14.523462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.523477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.523483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48b40) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.523514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e8540) 00:20:19.594 [2024-07-15 17:43:14.523535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.594 [2024-07-15 17:43:14.523547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e8540) 00:20:19.594 [2024-07-15 17:43:14.523564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.594 [2024-07-15 17:43:14.523575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9e8540) 00:20:19.594 [2024-07-15 17:43:14.523592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.594 [2024-07-15 17:43:14.523620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.523627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9e8540) 00:20:19.594 [2024-07-15 17:43:14.523636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.594 [2024-07-15 17:43:14.523657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48b40, cid 5, qid 0 00:20:19.594 [2024-07-15 17:43:14.523682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa489c0, cid 4, qid 0 00:20:19.594 [2024-07-15 17:43:14.523690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48cc0, cid 6, qid 0 00:20:19.594 [2024-07-15 
17:43:14.523698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48e40, cid 7, qid 0 00:20:19.594 [2024-07-15 17:43:14.524043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.594 [2024-07-15 17:43:14.524059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.594 [2024-07-15 17:43:14.524066] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524072] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=8192, cccid=5 00:20:19.594 [2024-07-15 17:43:14.524080] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa48b40) on tqpair(0x9e8540): expected_datao=0, payload_size=8192 00:20:19.594 [2024-07-15 17:43:14.524087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524097] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524105] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.594 [2024-07-15 17:43:14.524122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.594 [2024-07-15 17:43:14.524129] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524135] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=512, cccid=4 00:20:19.594 [2024-07-15 17:43:14.524146] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa489c0) on tqpair(0x9e8540): expected_datao=0, payload_size=512 00:20:19.594 [2024-07-15 17:43:14.524154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524163] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524170] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.594 [2024-07-15 17:43:14.524187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.594 [2024-07-15 17:43:14.524193] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524199] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=512, cccid=6 00:20:19.594 [2024-07-15 17:43:14.524207] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa48cc0) on tqpair(0x9e8540): expected_datao=0, payload_size=512 00:20:19.594 [2024-07-15 17:43:14.524214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524223] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524230] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:19.594 [2024-07-15 17:43:14.524247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:19.594 [2024-07-15 17:43:14.524253] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524259] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e8540): datao=0, datal=4096, cccid=7 00:20:19.594 [2024-07-15 17:43:14.524266] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa48e40) on tqpair(0x9e8540): expected_datao=0, payload_size=4096 00:20:19.594 [2024-07-15 17:43:14.524274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524283] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524290] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.524310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.524317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48b40) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.524342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.524353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.524360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa489c0) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.524396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.524407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.524413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48cc0) on tqpair=0x9e8540 00:20:19.594 [2024-07-15 17:43:14.524430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.594 [2024-07-15 17:43:14.524439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.594 [2024-07-15 17:43:14.524461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.594 [2024-07-15 17:43:14.524467] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48e40) on tqpair=0x9e8540 00:20:19.594 ===================================================== 00:20:19.594 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:19.594 ===================================================== 00:20:19.594 Controller Capabilities/Features 00:20:19.594 ================================ 00:20:19.594 Vendor ID: 8086 00:20:19.594 Subsystem Vendor ID: 8086 00:20:19.594 Serial Number: SPDK00000000000001 00:20:19.594 Model Number: SPDK bdev Controller 00:20:19.594 Firmware Version: 24.09 00:20:19.594 Recommended Arb Burst: 6 00:20:19.595 IEEE OUI Identifier: e4 d2 5c 00:20:19.595 Multi-path I/O 00:20:19.595 May have multiple subsystem ports: Yes 00:20:19.595 May have multiple controllers: Yes 00:20:19.595 Associated with SR-IOV VF: No 00:20:19.595 Max Data Transfer Size: 131072 00:20:19.595 Max Number of Namespaces: 32 00:20:19.595 Max Number of I/O Queues: 127 00:20:19.595 NVMe Specification Version (VS): 1.3 00:20:19.595 NVMe Specification Version (Identify): 1.3 00:20:19.595 Maximum Queue Entries: 128 00:20:19.595 Contiguous Queues Required: Yes 00:20:19.595 Arbitration Mechanisms Supported 00:20:19.595 Weighted Round Robin: Not Supported 00:20:19.595 Vendor Specific: Not Supported 00:20:19.595 Reset Timeout: 15000 ms 00:20:19.595 
Doorbell Stride: 4 bytes 00:20:19.595 NVM Subsystem Reset: Not Supported 00:20:19.595 Command Sets Supported 00:20:19.595 NVM Command Set: Supported 00:20:19.595 Boot Partition: Not Supported 00:20:19.595 Memory Page Size Minimum: 4096 bytes 00:20:19.595 Memory Page Size Maximum: 4096 bytes 00:20:19.595 Persistent Memory Region: Not Supported 00:20:19.595 Optional Asynchronous Events Supported 00:20:19.595 Namespace Attribute Notices: Supported 00:20:19.595 Firmware Activation Notices: Not Supported 00:20:19.595 ANA Change Notices: Not Supported 00:20:19.595 PLE Aggregate Log Change Notices: Not Supported 00:20:19.595 LBA Status Info Alert Notices: Not Supported 00:20:19.595 EGE Aggregate Log Change Notices: Not Supported 00:20:19.595 Normal NVM Subsystem Shutdown event: Not Supported 00:20:19.595 Zone Descriptor Change Notices: Not Supported 00:20:19.595 Discovery Log Change Notices: Not Supported 00:20:19.595 Controller Attributes 00:20:19.595 128-bit Host Identifier: Supported 00:20:19.595 Non-Operational Permissive Mode: Not Supported 00:20:19.595 NVM Sets: Not Supported 00:20:19.595 Read Recovery Levels: Not Supported 00:20:19.595 Endurance Groups: Not Supported 00:20:19.595 Predictable Latency Mode: Not Supported 00:20:19.595 Traffic Based Keep ALive: Not Supported 00:20:19.595 Namespace Granularity: Not Supported 00:20:19.595 SQ Associations: Not Supported 00:20:19.595 UUID List: Not Supported 00:20:19.595 Multi-Domain Subsystem: Not Supported 00:20:19.595 Fixed Capacity Management: Not Supported 00:20:19.595 Variable Capacity Management: Not Supported 00:20:19.595 Delete Endurance Group: Not Supported 00:20:19.595 Delete NVM Set: Not Supported 00:20:19.595 Extended LBA Formats Supported: Not Supported 00:20:19.595 Flexible Data Placement Supported: Not Supported 00:20:19.595 00:20:19.595 Controller Memory Buffer Support 00:20:19.595 ================================ 00:20:19.595 Supported: No 00:20:19.595 00:20:19.595 Persistent Memory Region Support 00:20:19.595 ================================ 00:20:19.595 Supported: No 00:20:19.595 00:20:19.595 Admin Command Set Attributes 00:20:19.595 ============================ 00:20:19.595 Security Send/Receive: Not Supported 00:20:19.595 Format NVM: Not Supported 00:20:19.595 Firmware Activate/Download: Not Supported 00:20:19.595 Namespace Management: Not Supported 00:20:19.595 Device Self-Test: Not Supported 00:20:19.595 Directives: Not Supported 00:20:19.595 NVMe-MI: Not Supported 00:20:19.595 Virtualization Management: Not Supported 00:20:19.595 Doorbell Buffer Config: Not Supported 00:20:19.595 Get LBA Status Capability: Not Supported 00:20:19.595 Command & Feature Lockdown Capability: Not Supported 00:20:19.595 Abort Command Limit: 4 00:20:19.595 Async Event Request Limit: 4 00:20:19.595 Number of Firmware Slots: N/A 00:20:19.595 Firmware Slot 1 Read-Only: N/A 00:20:19.595 Firmware Activation Without Reset: N/A 00:20:19.595 Multiple Update Detection Support: N/A 00:20:19.595 Firmware Update Granularity: No Information Provided 00:20:19.595 Per-Namespace SMART Log: No 00:20:19.595 Asymmetric Namespace Access Log Page: Not Supported 00:20:19.595 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:19.595 Command Effects Log Page: Supported 00:20:19.595 Get Log Page Extended Data: Supported 00:20:19.595 Telemetry Log Pages: Not Supported 00:20:19.595 Persistent Event Log Pages: Not Supported 00:20:19.595 Supported Log Pages Log Page: May Support 00:20:19.595 Commands Supported & Effects Log Page: Not Supported 00:20:19.595 Feature Identifiers & 
Effects Log Page:May Support 00:20:19.595 NVMe-MI Commands & Effects Log Page: May Support 00:20:19.595 Data Area 4 for Telemetry Log: Not Supported 00:20:19.595 Error Log Page Entries Supported: 128 00:20:19.595 Keep Alive: Supported 00:20:19.595 Keep Alive Granularity: 10000 ms 00:20:19.595 00:20:19.595 NVM Command Set Attributes 00:20:19.595 ========================== 00:20:19.595 Submission Queue Entry Size 00:20:19.595 Max: 64 00:20:19.595 Min: 64 00:20:19.595 Completion Queue Entry Size 00:20:19.595 Max: 16 00:20:19.595 Min: 16 00:20:19.595 Number of Namespaces: 32 00:20:19.595 Compare Command: Supported 00:20:19.595 Write Uncorrectable Command: Not Supported 00:20:19.595 Dataset Management Command: Supported 00:20:19.595 Write Zeroes Command: Supported 00:20:19.595 Set Features Save Field: Not Supported 00:20:19.595 Reservations: Supported 00:20:19.595 Timestamp: Not Supported 00:20:19.595 Copy: Supported 00:20:19.595 Volatile Write Cache: Present 00:20:19.595 Atomic Write Unit (Normal): 1 00:20:19.595 Atomic Write Unit (PFail): 1 00:20:19.595 Atomic Compare & Write Unit: 1 00:20:19.595 Fused Compare & Write: Supported 00:20:19.595 Scatter-Gather List 00:20:19.595 SGL Command Set: Supported 00:20:19.595 SGL Keyed: Supported 00:20:19.595 SGL Bit Bucket Descriptor: Not Supported 00:20:19.595 SGL Metadata Pointer: Not Supported 00:20:19.595 Oversized SGL: Not Supported 00:20:19.595 SGL Metadata Address: Not Supported 00:20:19.595 SGL Offset: Supported 00:20:19.595 Transport SGL Data Block: Not Supported 00:20:19.595 Replay Protected Memory Block: Not Supported 00:20:19.595 00:20:19.595 Firmware Slot Information 00:20:19.595 ========================= 00:20:19.595 Active slot: 1 00:20:19.595 Slot 1 Firmware Revision: 24.09 00:20:19.595 00:20:19.595 00:20:19.595 Commands Supported and Effects 00:20:19.595 ============================== 00:20:19.595 Admin Commands 00:20:19.595 -------------- 00:20:19.595 Get Log Page (02h): Supported 00:20:19.595 Identify (06h): Supported 00:20:19.595 Abort (08h): Supported 00:20:19.595 Set Features (09h): Supported 00:20:19.595 Get Features (0Ah): Supported 00:20:19.595 Asynchronous Event Request (0Ch): Supported 00:20:19.595 Keep Alive (18h): Supported 00:20:19.595 I/O Commands 00:20:19.595 ------------ 00:20:19.595 Flush (00h): Supported LBA-Change 00:20:19.595 Write (01h): Supported LBA-Change 00:20:19.595 Read (02h): Supported 00:20:19.595 Compare (05h): Supported 00:20:19.595 Write Zeroes (08h): Supported LBA-Change 00:20:19.595 Dataset Management (09h): Supported LBA-Change 00:20:19.595 Copy (19h): Supported LBA-Change 00:20:19.595 00:20:19.595 Error Log 00:20:19.595 ========= 00:20:19.595 00:20:19.595 Arbitration 00:20:19.595 =========== 00:20:19.595 Arbitration Burst: 1 00:20:19.595 00:20:19.595 Power Management 00:20:19.596 ================ 00:20:19.596 Number of Power States: 1 00:20:19.596 Current Power State: Power State #0 00:20:19.596 Power State #0: 00:20:19.596 Max Power: 0.00 W 00:20:19.596 Non-Operational State: Operational 00:20:19.596 Entry Latency: Not Reported 00:20:19.596 Exit Latency: Not Reported 00:20:19.596 Relative Read Throughput: 0 00:20:19.596 Relative Read Latency: 0 00:20:19.596 Relative Write Throughput: 0 00:20:19.596 Relative Write Latency: 0 00:20:19.596 Idle Power: Not Reported 00:20:19.596 Active Power: Not Reported 00:20:19.596 Non-Operational Permissive Mode: Not Supported 00:20:19.596 00:20:19.596 Health Information 00:20:19.596 ================== 00:20:19.596 Critical Warnings: 00:20:19.596 Available Spare Space: 
OK 00:20:19.596 Temperature: OK 00:20:19.596 Device Reliability: OK 00:20:19.596 Read Only: No 00:20:19.596 Volatile Memory Backup: OK 00:20:19.596 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:19.596 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:19.596 Available Spare: 0% 00:20:19.596 Available Spare Threshold: 0% 00:20:19.596 Life Percentage Used:[2024-07-15 17:43:14.524594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.524606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9e8540) 00:20:19.596 [2024-07-15 17:43:14.524619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.596 [2024-07-15 17:43:14.524641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48e40, cid 7, qid 0 00:20:19.596 [2024-07-15 17:43:14.524814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.596 [2024-07-15 17:43:14.524830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.596 [2024-07-15 17:43:14.524837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.524843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48e40) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.524900] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:19.596 [2024-07-15 17:43:14.524928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa483c0) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.524938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.596 [2024-07-15 17:43:14.524947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48540) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.524955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.596 [2024-07-15 17:43:14.524963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa486c0) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.524971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.596 [2024-07-15 17:43:14.524979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48840) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.524987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.596 [2024-07-15 17:43:14.524999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e8540) 00:20:19.596 [2024-07-15 17:43:14.525024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.596 [2024-07-15 17:43:14.525046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48840, cid 3, qid 0 00:20:19.596 [2024-07-15 17:43:14.525192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.596 [2024-07-15 17:43:14.525207] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.596 [2024-07-15 17:43:14.525214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48840) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.525232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e8540) 00:20:19.596 [2024-07-15 17:43:14.525256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.596 [2024-07-15 17:43:14.525282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48840, cid 3, qid 0 00:20:19.596 [2024-07-15 17:43:14.525426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.596 [2024-07-15 17:43:14.525441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.596 [2024-07-15 17:43:14.525448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48840) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.525462] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:19.596 [2024-07-15 17:43:14.525473] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:19.596 [2024-07-15 17:43:14.525490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e8540) 00:20:19.596 [2024-07-15 17:43:14.525515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.596 [2024-07-15 17:43:14.525536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48840, cid 3, qid 0 00:20:19.596 [2024-07-15 17:43:14.525691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.596 [2024-07-15 17:43:14.525704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.596 [2024-07-15 17:43:14.525711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48840) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.525733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.525749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e8540) 00:20:19.596 [2024-07-15 17:43:14.525759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.596 [2024-07-15 17:43:14.525779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48840, cid 3, qid 0 00:20:19.596 [2024-07-15 17:43:14.529889] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.596 [2024-07-15 17:43:14.529917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.596 [2024-07-15 17:43:14.529924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.529931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48840) on tqpair=0x9e8540 00:20:19.596 [2024-07-15 17:43:14.529963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.529974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:19.596 [2024-07-15 17:43:14.529980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e8540) 00:20:19.596 [2024-07-15 17:43:14.529991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.596 [2024-07-15 17:43:14.530013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa48840, cid 3, qid 0 00:20:19.596 [2024-07-15 17:43:14.530159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:19.596 [2024-07-15 17:43:14.530174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:19.596 [2024-07-15 17:43:14.530181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:19.597 [2024-07-15 17:43:14.530187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa48840) on tqpair=0x9e8540 00:20:19.597 [2024-07-15 17:43:14.530200] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:19.597 0% 00:20:19.597 Data Units Read: 0 00:20:19.597 Data Units Written: 0 00:20:19.597 Host Read Commands: 0 00:20:19.597 Host Write Commands: 0 00:20:19.597 Controller Busy Time: 0 minutes 00:20:19.597 Power Cycles: 0 00:20:19.597 Power On Hours: 0 hours 00:20:19.597 Unsafe Shutdowns: 0 00:20:19.597 Unrecoverable Media Errors: 0 00:20:19.597 Lifetime Error Log Entries: 0 00:20:19.597 Warning Temperature Time: 0 minutes 00:20:19.597 Critical Temperature Time: 0 minutes 00:20:19.597 00:20:19.597 Number of Queues 00:20:19.597 ================ 00:20:19.597 Number of I/O Submission Queues: 127 00:20:19.597 Number of I/O Completion Queues: 127 00:20:19.597 00:20:19.597 Active Namespaces 00:20:19.597 ================= 00:20:19.597 Namespace ID:1 00:20:19.597 Error Recovery Timeout: Unlimited 00:20:19.597 Command Set Identifier: NVM (00h) 00:20:19.597 Deallocate: Supported 00:20:19.597 Deallocated/Unwritten Error: Not Supported 00:20:19.597 Deallocated Read Value: Unknown 00:20:19.597 Deallocate in Write Zeroes: Not Supported 00:20:19.597 Deallocated Guard Field: 0xFFFF 00:20:19.597 Flush: Supported 00:20:19.597 Reservation: Supported 00:20:19.597 Namespace Sharing Capabilities: Multiple Controllers 00:20:19.597 Size (in LBAs): 131072 (0GiB) 00:20:19.597 Capacity (in LBAs): 131072 (0GiB) 00:20:19.597 Utilization (in LBAs): 131072 (0GiB) 00:20:19.597 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:19.597 EUI64: ABCDEF0123456789 00:20:19.597 UUID: c08594b4-2cca-4e02-8e9a-5968c6d42de6 00:20:19.597 Thin Provisioning: Not Supported 00:20:19.597 Per-NS Atomic Units: Yes 00:20:19.597 Atomic Boundary Size (Normal): 0 00:20:19.597 Atomic Boundary Size (PFail): 0 00:20:19.597 Atomic Boundary Offset: 0 00:20:19.597 Maximum Single Source Range Length: 65535 00:20:19.597 Maximum Copy Length: 65535 00:20:19.597 Maximum Source Range Count: 1 
00:20:19.597 NGUID/EUI64 Never Reused: No 00:20:19.597 Namespace Write Protected: No 00:20:19.597 Number of LBA Formats: 1 00:20:19.597 Current LBA Format: LBA Format #00 00:20:19.597 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:19.597 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:19.597 rmmod nvme_tcp 00:20:19.597 rmmod nvme_fabrics 00:20:19.597 rmmod nvme_keyring 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2286485 ']' 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2286485 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2286485 ']' 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2286485 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2286485 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2286485' 00:20:19.597 killing process with pid 2286485 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2286485 00:20:19.597 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2286485 00:20:19.856 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:19.856 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:19.856 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:19.856 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.856 17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:19.856 
17:43:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.856 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.856 17:43:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.391 17:43:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.391 00:20:22.391 real 0m5.965s 00:20:22.391 user 0m7.237s 00:20:22.391 sys 0m1.789s 00:20:22.391 17:43:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:22.391 17:43:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.391 ************************************ 00:20:22.391 END TEST nvmf_identify 00:20:22.391 ************************************ 00:20:22.391 17:43:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:22.391 17:43:17 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:22.391 17:43:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:22.391 17:43:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.391 17:43:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:22.391 ************************************ 00:20:22.391 START TEST nvmf_perf 00:20:22.391 ************************************ 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:22.391 * Looking for test storage... 00:20:22.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.391 17:43:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.392 17:43:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.293 17:43:19 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:24.293 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:24.293 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.293 
17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:24.293 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:24.293 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:20:24.293 00:20:24.293 --- 10.0.0.2 ping statistics --- 00:20:24.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.293 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:20:24.293 00:20:24.293 --- 10.0.0.1 ping statistics --- 00:20:24.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.293 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.293 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2288690 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2288690 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2288690 ']' 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.294 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.294 [2024-07-15 17:43:19.280767] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:20:24.294 [2024-07-15 17:43:19.280859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.294 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.294 [2024-07-15 17:43:19.348257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.552 [2024-07-15 17:43:19.459008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.552 [2024-07-15 17:43:19.459064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.552 [2024-07-15 17:43:19.459092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.552 [2024-07-15 17:43:19.459104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.552 [2024-07-15 17:43:19.459113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.552 [2024-07-15 17:43:19.459182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.552 [2024-07-15 17:43:19.459209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.552 [2024-07-15 17:43:19.459272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.552 [2024-07-15 17:43:19.459275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:24.552 17:43:19 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:27.838 17:43:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:27.838 17:43:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:28.096 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:28.096 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:28.352 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:28.353 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:28.353 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:28.353 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:28.353 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.620 [2024-07-15 17:43:23.583982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:20:28.620 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.913 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:28.913 17:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.171 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:29.171 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:29.432 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.432 [2024-07-15 17:43:24.559513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.691 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:29.950 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:29.950 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:29.950 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:29.950 17:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:31.343 Initializing NVMe Controllers 00:20:31.343 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:31.343 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:31.343 Initialization complete. Launching workers. 00:20:31.343 ======================================================== 00:20:31.343 Latency(us) 00:20:31.343 Device Information : IOPS MiB/s Average min max 00:20:31.343 PCIE (0000:88:00.0) NSID 1 from core 0: 85096.63 332.41 375.33 40.24 5317.49 00:20:31.343 ======================================================== 00:20:31.343 Total : 85096.63 332.41 375.33 40.24 5317.49 00:20:31.343 00:20:31.343 17:43:26 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:31.343 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.279 Initializing NVMe Controllers 00:20:32.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:32.279 Initialization complete. Launching workers. 
00:20:32.279 ======================================================== 00:20:32.279 Latency(us) 00:20:32.279 Device Information : IOPS MiB/s Average min max 00:20:32.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11091.85 212.25 45767.80 00:20:32.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18649.52 7945.88 47900.17 00:20:32.279 ======================================================== 00:20:32.279 Total : 148.00 0.58 13951.51 212.25 47900.17 00:20:32.279 00:20:32.537 17:43:27 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.537 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.915 Initializing NVMe Controllers 00:20:33.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:33.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:33.915 Initialization complete. Launching workers. 00:20:33.915 ======================================================== 00:20:33.915 Latency(us) 00:20:33.915 Device Information : IOPS MiB/s Average min max 00:20:33.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8299.69 32.42 3857.14 667.90 8199.03 00:20:33.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3811.38 14.89 8478.09 4495.04 47595.17 00:20:33.915 ======================================================== 00:20:33.915 Total : 12111.07 47.31 5311.36 667.90 47595.17 00:20:33.915 00:20:33.915 17:43:28 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:33.915 17:43:28 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:33.915 17:43:28 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:33.915 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.456 Initializing NVMe Controllers 00:20:36.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.456 Controller IO queue size 128, less than required. 00:20:36.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.456 Controller IO queue size 128, less than required. 00:20:36.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:36.456 Initialization complete. Launching workers. 
00:20:36.456 ======================================================== 00:20:36.456 Latency(us) 00:20:36.456 Device Information : IOPS MiB/s Average min max 00:20:36.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1023.23 255.81 128899.42 76354.63 182599.45 00:20:36.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 566.52 141.63 235526.40 108823.05 372976.52 00:20:36.456 ======================================================== 00:20:36.456 Total : 1589.75 397.44 166896.79 76354.63 372976.52 00:20:36.456 00:20:36.456 17:43:31 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:36.456 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.716 No valid NVMe controllers or AIO or URING devices found 00:20:36.716 Initializing NVMe Controllers 00:20:36.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.716 Controller IO queue size 128, less than required. 00:20:36.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.716 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:36.716 Controller IO queue size 128, less than required. 00:20:36.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.716 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:36.716 WARNING: Some requested NVMe devices were skipped 00:20:36.716 17:43:31 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:36.716 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.249 Initializing NVMe Controllers 00:20:39.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.249 Controller IO queue size 128, less than required. 00:20:39.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.249 Controller IO queue size 128, less than required. 00:20:39.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:39.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:39.249 Initialization complete. Launching workers. 
00:20:39.249 00:20:39.249 ==================== 00:20:39.249 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:39.249 TCP transport: 00:20:39.249 polls: 27749 00:20:39.249 idle_polls: 10788 00:20:39.249 sock_completions: 16961 00:20:39.249 nvme_completions: 4213 00:20:39.249 submitted_requests: 6328 00:20:39.249 queued_requests: 1 00:20:39.249 00:20:39.249 ==================== 00:20:39.249 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:39.249 TCP transport: 00:20:39.249 polls: 26597 00:20:39.249 idle_polls: 9709 00:20:39.249 sock_completions: 16888 00:20:39.249 nvme_completions: 4177 00:20:39.249 submitted_requests: 6310 00:20:39.249 queued_requests: 1 00:20:39.249 ======================================================== 00:20:39.249 Latency(us) 00:20:39.249 Device Information : IOPS MiB/s Average min max 00:20:39.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1053.00 263.25 124846.54 71690.91 167997.07 00:20:39.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1044.00 261.00 124038.65 47783.18 201057.85 00:20:39.249 ======================================================== 00:20:39.249 Total : 2097.00 524.25 124444.33 47783.18 201057.85 00:20:39.249 00:20:39.249 17:43:34 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:39.249 17:43:34 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.506 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.506 rmmod nvme_tcp 00:20:39.506 rmmod nvme_fabrics 00:20:39.506 rmmod nvme_keyring 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2288690 ']' 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2288690 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2288690 ']' 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2288690 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2288690 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:39.764 17:43:34 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2288690' 00:20:39.764 killing process with pid 2288690 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2288690 00:20:39.764 17:43:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2288690 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.665 17:43:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.570 17:43:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.570 00:20:43.570 real 0m21.326s 00:20:43.570 user 1m3.628s 00:20:43.570 sys 0m5.182s 00:20:43.570 17:43:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.570 17:43:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:43.570 ************************************ 00:20:43.570 END TEST nvmf_perf 00:20:43.570 ************************************ 00:20:43.570 17:43:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:43.570 17:43:38 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:43.570 17:43:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:43.570 17:43:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.570 17:43:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.570 ************************************ 00:20:43.570 START TEST nvmf_fio_host 00:20:43.570 ************************************ 00:20:43.570 17:43:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:43.570 * Looking for test storage... 
00:20:43.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:43.570 17:43:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.570 17:43:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.570 17:43:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.571 17:43:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:45.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:45.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.474 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:45.475 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:45.475 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
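The discovery loop replayed just above reduces to a sysfs lookup: for each supported PCI function, the bound driver's interface names sit under /sys/bus/pci/devices/<bdf>/net/. A rough standalone sketch of that idea, with the PCI addresses and cvl_* names taken from this log and everything else illustrative:

#!/usr/bin/env bash
# PCI functions of the supported NICs (E810 ports, device id 0x159b in this run).
pci_devs=(0000:0a:00.0 0000:0a:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    # A bound network driver exposes its interface names under .../net/.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue            # skip functions with no netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# With two usable ports the harness picks one as target and one as initiator,
# matching NVMF_TARGET_INTERFACE=cvl_0_0 and NVMF_INITIATOR_INTERFACE=cvl_0_1 below.
(( ${#net_devs[@]} >= 2 )) && echo "target=${net_devs[0]} initiator=${net_devs[1]}"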
00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:45.475 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:45.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:20:45.734 00:20:45.734 --- 10.0.0.2 ping statistics --- 00:20:45.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.734 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:20:45.734 00:20:45.734 --- 10.0.0.1 ping statistics --- 00:20:45.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.734 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2292643 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2292643 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2292643 ']' 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.734 17:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.734 [2024-07-15 17:43:40.717319] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:20:45.734 [2024-07-15 17:43:40.717409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.734 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.734 [2024-07-15 17:43:40.783241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.012 [2024-07-15 17:43:40.894646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:46.012 [2024-07-15 17:43:40.894709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.012 [2024-07-15 17:43:40.894738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.012 [2024-07-15 17:43:40.894748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.012 [2024-07-15 17:43:40.894758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.012 [2024-07-15 17:43:40.894805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.012 [2024-07-15 17:43:40.894867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.012 [2024-07-15 17:43:40.894933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.012 [2024-07-15 17:43:40.894936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.012 17:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.012 17:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:46.012 17:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:46.305 [2024-07-15 17:43:41.263452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.305 17:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:46.305 17:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.305 17:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.305 17:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:46.563 Malloc1 00:20:46.563 17:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.820 17:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:47.078 17:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.336 [2024-07-15 17:43:42.309075] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.336 17:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:47.594 17:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:47.853 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:47.853 fio-3.35 00:20:47.853 Starting 1 thread 00:20:47.853 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.383 00:20:50.383 test: (groupid=0, jobs=1): err= 0: pid=2293010: Mon Jul 15 17:43:45 2024 00:20:50.383 read: IOPS=7262, BW=28.4MiB/s (29.7MB/s)(57.0MiB/2008msec) 00:20:50.383 slat (usec): min=2, max=101, avg= 2.69, stdev= 1.55 00:20:50.383 clat (usec): min=4572, max=16088, avg=9625.56, stdev=938.40 00:20:50.383 lat (usec): min=4593, max=16090, avg=9628.26, stdev=938.34 00:20:50.383 clat percentiles (usec): 00:20:50.383 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:20:50.383 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:20:50.383 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:20:50.383 | 99.00th=[11863], 99.50th=[12125], 99.90th=[14877], 99.95th=[15139], 00:20:50.383 | 99.99th=[15401] 00:20:50.383 bw ( KiB/s): min=28096, 
max=29744, per=99.92%, avg=29026.00, stdev=683.91, samples=4 00:20:50.383 iops : min= 7024, max= 7436, avg=7256.50, stdev=170.98, samples=4 00:20:50.383 write: IOPS=7229, BW=28.2MiB/s (29.6MB/s)(56.7MiB/2008msec); 0 zone resets 00:20:50.383 slat (nsec): min=2228, max=99102, avg=2801.57, stdev=1365.04 00:20:50.383 clat (usec): min=2225, max=14844, avg=7923.40, stdev=810.23 00:20:50.383 lat (usec): min=2231, max=14846, avg=7926.21, stdev=810.22 00:20:50.383 clat percentiles (usec): 00:20:50.383 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7242], 00:20:50.383 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8094], 00:20:50.383 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:20:50.383 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[12780], 99.95th=[13435], 00:20:50.383 | 99.99th=[14746] 00:20:50.383 bw ( KiB/s): min=28688, max=29176, per=100.00%, avg=28920.00, stdev=207.28, samples=4 00:20:50.383 iops : min= 7172, max= 7294, avg=7230.00, stdev=51.82, samples=4 00:20:50.383 lat (msec) : 4=0.05%, 10=83.19%, 20=16.76% 00:20:50.383 cpu : usr=56.40%, sys=38.27%, ctx=72, majf=0, minf=41 00:20:50.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:50.383 issued rwts: total=14583,14517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:50.383 00:20:50.383 Run status group 0 (all jobs): 00:20:50.383 READ: bw=28.4MiB/s (29.7MB/s), 28.4MiB/s-28.4MiB/s (29.7MB/s-29.7MB/s), io=57.0MiB (59.7MB), run=2008-2008msec 00:20:50.383 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=56.7MiB (59.5MB), run=2008-2008msec 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:50.383 17:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:50.383 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:50.383 fio-3.35 00:20:50.383 Starting 1 thread 00:20:50.383 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.282 [2024-07-15 17:43:47.060754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ab6a0 is same with the state(5) to be set 00:20:52.849 00:20:52.849 test: (groupid=0, jobs=1): err= 0: pid=2293346: Mon Jul 15 17:43:47 2024 00:20:52.849 read: IOPS=8241, BW=129MiB/s (135MB/s)(258MiB/2004msec) 00:20:52.850 slat (usec): min=2, max=110, avg= 3.85, stdev= 1.79 00:20:52.850 clat (usec): min=3293, max=18786, avg=9403.06, stdev=2237.07 00:20:52.850 lat (usec): min=3298, max=18790, avg=9406.90, stdev=2237.18 00:20:52.850 clat percentiles (usec): 00:20:52.850 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7570], 00:20:52.850 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:20:52.850 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12125], 95.00th=[13173], 00:20:52.850 | 99.00th=[16450], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:20:52.850 | 99.99th=[17957] 00:20:52.850 bw ( KiB/s): min=59200, max=68864, per=49.60%, avg=65400.00, stdev=4283.41, samples=4 00:20:52.850 iops : min= 3700, max= 4304, avg=4087.50, stdev=267.71, samples=4 00:20:52.850 write: IOPS=4655, BW=72.7MiB/s (76.3MB/s)(134MiB/1843msec); 0 zone resets 00:20:52.850 slat (usec): min=30, max=224, avg=34.36, stdev= 6.51 00:20:52.850 clat (usec): min=2779, max=20412, avg=11077.26, stdev=1987.52 00:20:52.850 lat (usec): min=2815, max=20445, avg=11111.62, stdev=1988.50 00:20:52.850 clat percentiles (usec): 00:20:52.850 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9372], 00:20:52.850 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:20:52.850 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13829], 95.00th=[14746], 00:20:52.850 | 99.00th=[16188], 99.50th=[16909], 99.90th=[18744], 99.95th=[19006], 00:20:52.850 | 99.99th=[20317] 00:20:52.850 bw ( KiB/s): min=61728, max=71680, per=91.46%, avg=68128.00, stdev=4474.74, samples=4 00:20:52.850 iops : min= 3858, max= 4480, 
avg=4258.00, stdev=279.67, samples=4 00:20:52.850 lat (msec) : 4=0.10%, 10=53.12%, 20=46.77%, 50=0.01% 00:20:52.850 cpu : usr=75.49%, sys=21.37%, ctx=22, majf=0, minf=59 00:20:52.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:20:52.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.850 issued rwts: total=16516,8580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.850 00:20:52.850 Run status group 0 (all jobs): 00:20:52.850 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2004-2004msec 00:20:52.850 WRITE: bw=72.7MiB/s (76.3MB/s), 72.7MiB/s-72.7MiB/s (76.3MB/s-76.3MB/s), io=134MiB (141MB), run=1843-1843msec 00:20:52.850 17:43:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.108 17:43:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.108 rmmod nvme_tcp 00:20:53.108 rmmod nvme_fabrics 00:20:53.108 rmmod nvme_keyring 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2292643 ']' 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2292643 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2292643 ']' 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2292643 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2292643 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2292643' 00:20:53.108 killing process with pid 2292643 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2292643 00:20:53.108 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2292643 
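Both fio passes in this test go through the SPDK external ioengine rather than the kernel NVMe/TCP initiator: the spdk_nvme plugin is LD_PRELOADed into a stock fio binary, the job file selects ioengine=spdk, and the target is addressed through the --filename transport ID string instead of a block device. A minimal sketch of the same invocation, assuming the plugin and job file paths shown here stand in for the full workspace paths in the log:

# The fio job file must contain ioengine=spdk; the transport ID goes in --filename.
export LD_PRELOAD=/path/to/spdk/build/fio/spdk_nvme
/usr/src/fio/fio /path/to/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096

The second pass swaps in mock_sgl_config.fio, which, judging by its name and the 16 KiB request size in the output above, drives the same path to cover SGL handling.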
00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.367 17:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.274 17:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.274 00:20:55.274 real 0m11.962s 00:20:55.274 user 0m34.078s 00:20:55.274 sys 0m4.357s 00:20:55.274 17:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:55.274 17:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.274 ************************************ 00:20:55.274 END TEST nvmf_fio_host 00:20:55.274 ************************************ 00:20:55.274 17:43:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:55.274 17:43:50 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:55.274 17:43:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:55.274 17:43:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.274 17:43:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:55.533 ************************************ 00:20:55.533 START TEST nvmf_failover 00:20:55.533 ************************************ 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:55.533 * Looking for test storage... 
00:20:55.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.533 17:43:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:57.445 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:57.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:57.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:57.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:57.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:57.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:57.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:20:57.446 00:20:57.446 --- 10.0.0.2 ping statistics --- 00:20:57.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.446 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:20:57.446 00:20:57.446 --- 10.0.0.1 ping statistics --- 00:20:57.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.446 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2295535 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2295535 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2295535 ']' 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.446 17:43:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:57.446 [2024-07-15 17:43:52.468131] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
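Before the failover test proper, the harness builds its TCP topology from the two detected e810 ports: one port is moved into a private network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator side (10.0.0.1), so the NVMe/TCP traffic crosses the link between the two ports rather than the loopback device. A condensed sketch of the sequence logged above, using the interface names detected in this run (cvl_0_0, cvl_0_1) and a shortened path to nvmf_tgt:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    # start the target inside the namespace, as in the nvmfappstart line above
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE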
00:20:57.446 [2024-07-15 17:43:52.468215] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.447 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.447 [2024-07-15 17:43:52.538311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:57.705 [2024-07-15 17:43:52.649462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.705 [2024-07-15 17:43:52.649538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.705 [2024-07-15 17:43:52.649567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.705 [2024-07-15 17:43:52.649579] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.705 [2024-07-15 17:43:52.649590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.705 [2024-07-15 17:43:52.649678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.705 [2024-07-15 17:43:52.649742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.705 [2024-07-15 17:43:52.649745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:58.640 [2024-07-15 17:43:53.660018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.640 17:43:53 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:58.898 Malloc0 00:20:58.898 17:43:53 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.155 17:43:54 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.413 17:43:54 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.671 [2024-07-15 17:43:54.741171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.671 17:43:54 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:59.930 [2024-07-15 
17:43:55.030091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:59.930 17:43:55 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:00.496 [2024-07-15 17:43:55.327210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2295945 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2295945 /var/tmp/bdevperf.sock 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2295945 ']' 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.497 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:00.754 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.754 17:43:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:00.754 17:43:55 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.010 NVMe0n1 00:21:01.010 17:43:56 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.267 00:21:01.267 17:43:56 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2296079 00:21:01.267 17:43:56 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.267 17:43:56 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:02.676 17:43:57 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.676 17:43:57 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:05.962 17:44:00 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:06.220 00:21:06.220 17:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:06.220 [2024-07-15 17:44:01.354322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be 
set 00:21:06.220 [2024-07-15 17:44:01.354658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 
is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.354993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.220 [2024-07-15 17:44:01.355006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27640 is same with the state(5) to be set 00:21:06.479 17:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:09.766 17:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.766 [2024-07-15 17:44:04.612585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.766 17:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:10.706 17:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:10.965 [2024-07-15 17:44:05.895316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 
00:21:10.965 [2024-07-15 17:44:05.895533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 [2024-07-15 17:44:05.895594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e70 is same with the state(5) to be set 00:21:10.965 17:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2296079 00:21:17.538 0 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2295945 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2295945 ']' 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2295945 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2295945 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2295945' 00:21:17.538 killing process with pid 2295945 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2295945 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2295945 00:21:17.538 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:17.538 [2024-07-15 17:43:55.391921] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:21:17.538 [2024-07-15 17:43:55.392001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295945 ] 00:21:17.538 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.538 [2024-07-15 17:43:55.451294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.538 [2024-07-15 17:43:55.563353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.538 Running I/O for 15 seconds... 
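The bdevperf log (try.txt) replayed from here on is dominated by ABORTED - SQ DELETION completions; as the sequence above reads, that is the point of the test: listeners on ports 4420/4421/4422 are added and removed under a 15-second verify workload, and each removal tears down the active queue pair so outstanding I/O is aborted and retried on another attached path. A condensed sketch of the choreography driven above, with paths shortened and rpc.py / bdevperf.py standing in for the SPDK scripts named in the log:

    # target side: transport, backing bdev, subsystem, namespace, listeners
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side: bdevperf with two paths to the same controller, then start I/O
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    # force failovers while I/O is running
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422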
00:21:17.538 [2024-07-15 17:43:57.616889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.538 [2024-07-15 17:43:57.616958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.616988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.538 [2024-07-15 17:43:57.617005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.538 [2024-07-15 17:43:57.617036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.538 [2024-07-15 17:43:57.617065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.538 [2024-07-15 17:43:57.617094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.538 [2024-07-15 17:43:57.617123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.538 [2024-07-15 17:43:57.617151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617253] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 17:43:57.617377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 17:43:57.617390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.617648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:85 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 17:43:57.617846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.617874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.617912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.617941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.617969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.617984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.617997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80624 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 
17:43:57.618429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 17:43:57.618635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.539 [2024-07-15 17:43:57.618649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.618972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.618986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.540 [2024-07-15 17:43:57.619517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 
17:43:57.619649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.540 [2024-07-15 17:43:57.619889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.540 [2024-07-15 17:43:57.619905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.619920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.619935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.619949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.619963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.619978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.619992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:75 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.541 [2024-07-15 17:43:57.620490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80432 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:43:57.620695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdab390 is same with the state(5) to be set 00:21:17.541 [2024-07-15 17:43:57.620727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.541 [2024-07-15 17:43:57.620739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.541 [2024-07-15 17:43:57.620750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80480 len:8 PRP1 0x0 PRP2 0x0 00:21:17.541 [2024-07-15 17:43:57.620763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:43:57.620823] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdab390 was disconnected and freed. reset controller. 
00:21:17.541 [2024-07-15 17:43:57.620842] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:17.541 [2024-07-15 17:43:57.620883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.541 [2024-07-15 17:43:57.620902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:17.541 [2024-07-15 17:43:57.620917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.541 [2024-07-15 17:43:57.620930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:17.541 [2024-07-15 17:43:57.620944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.541 [2024-07-15 17:43:57.620956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:17.541 [2024-07-15 17:43:57.620969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:17.541 [2024-07-15 17:43:57.620982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:17.541 [2024-07-15 17:43:57.620995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:17.541 [2024-07-15 17:43:57.624244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:17.541 [2024-07-15 17:43:57.624281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd850f0 (9): Bad file descriptor
00:21:17.541 [2024-07-15 17:43:57.662705] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
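[editor's note] The block above is the interesting part of this dump: every command still queued on the I/O queue (sqid:1) is completed with "ABORTED - SQ DELETION" while the queue pair is torn down, bdev_nvme starts the failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes before I/O resumes on the new path. If a copy of this console output is saved to a file, the abort storm can be tallied with standard tools; the lines below are only an illustrative post-processing sketch and are not part of the autotest scripts ('build.log' is a placeholder path, not a file the test produces).

    # Illustrative only; 'build.log' is a placeholder for a saved copy of this console output.
    # Total number of commands completed as ABORTED - SQ DELETION:
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l
    # The same aborts broken down by opcode on the I/O queue (sqid:1):
    grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c

A large abort count here does not by itself indicate a failure; it is consistent with the test deliberately removing the first listener to force the failover, after which the queued I/O is retried once the reset to 10.0.0.2:4421 finishes.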
00:21:17.541 [2024-07-15 17:44:01.356134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:44:01.356178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:44:01.356220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:44:01.356247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:44:01.356265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:44:01.356294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:44:01.356310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:44:01.356324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:44:01.356338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:44:01.356351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:44:01.356366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:44:01.356379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.541 [2024-07-15 17:44:01.356393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.541 [2024-07-15 17:44:01.356407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356504] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.542 [2024-07-15 17:44:01.356662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.356970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.356985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90344 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.542 [2024-07-15 17:44:01.357467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.542 [2024-07-15 17:44:01.357483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.543 [2024-07-15 17:44:01.357582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.543 [2024-07-15 17:44:01.357612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 
17:44:01.357727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.357972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.357986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.543 [2024-07-15 17:44:01.358591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.543 [2024-07-15 17:44:01.358654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90640 len:8 PRP1 0x0 PRP2 0x0 00:21:17.543 [2024-07-15 17:44:01.358667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.543 [2024-07-15 17:44:01.358696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:17.543 [2024-07-15 17:44:01.358707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90648 len:8 PRP1 0x0 PRP2 0x0 00:21:17.543 [2024-07-15 17:44:01.358719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.543 [2024-07-15 17:44:01.358742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.543 [2024-07-15 17:44:01.358752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90656 len:8 PRP1 0x0 PRP2 0x0 00:21:17.543 [2024-07-15 17:44:01.358764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.543 [2024-07-15 17:44:01.358776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.543 [2024-07-15 17:44:01.358787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.358797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90664 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.358809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.358821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.358832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.358842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90672 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.358868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.358888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.358900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.358911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90680 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.358928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.358941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.358952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.358965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90688 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.358982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.358996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 
17:44:01.359018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90696 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90704 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90712 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90720 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90728 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90736 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90744 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90752 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90760 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90768 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90776 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90784 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:90792 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90800 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90808 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90816 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90824 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90832 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.359909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90840 len:8 PRP1 0x0 PRP2 0x0 
00:21:17.544 [2024-07-15 17:44:01.359964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.359976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.359988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.359999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90848 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.360012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.360025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.360036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.360047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90856 len:8 PRP1 0x0 PRP2 0x0 00:21:17.544 [2024-07-15 17:44:01.360059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 17:44:01.360072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.544 [2024-07-15 17:44:01.360083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.544 [2024-07-15 17:44:01.360093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90864 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90872 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90880 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90888 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90896 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90904 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90912 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90920 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90928 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90936 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90944 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90952 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90960 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90968 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90976 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90984 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:17.545 [2024-07-15 17:44:01.360840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90992 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91000 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.360955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.360966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.360977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91008 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.360990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.361002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.361013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.361029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91016 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.361042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.361056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.545 [2024-07-15 17:44:01.361067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.545 [2024-07-15 17:44:01.361077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91024 len:8 PRP1 0x0 PRP2 0x0 00:21:17.545 [2024-07-15 17:44:01.361090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:01.361154] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf4fd80 was disconnected and freed. reset controller. 
00:21:17.545 [2024-07-15 17:44:01.361173] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:21:17.545 [2024-07-15 17:44:01.361207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:17.545 [2024-07-15 17:44:01.361225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:17.545 [2024-07-15 17:44:01.361241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:17.545 [2024-07-15 17:44:01.361254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:17.545 [2024-07-15 17:44:01.361268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:17.545 [2024-07-15 17:44:01.361281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:17.545 [2024-07-15 17:44:01.361294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:17.545 [2024-07-15 17:44:01.361307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:17.545 [2024-07-15 17:44:01.361320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:17.545 [2024-07-15 17:44:01.364588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:17.545 [2024-07-15 17:44:01.364628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd850f0 (9): Bad file descriptor 
00:21:17.545 [2024-07-15 17:44:01.401690] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:17.545 [2024-07-15 17:44:05.897516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.545 [2024-07-15 17:44:05.897570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 17:44:05.897600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897860] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.897983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.897998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21152 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.546 [2024-07-15 17:44:05.898651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 17:44:05.898665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 
[2024-07-15 17:44:05.898795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.898977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.898991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.547 [2024-07-15 17:44:05.899640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 17:44:05.899668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 17:44:05.899696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 17:44:05.899723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 17:44:05.899751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 17:44:05.899780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 17:44:05.899807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.547 [2024-07-15 17:44:05.899852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20776 len:8 PRP1 0x0 PRP2 0x0 00:21:17.547 [2024-07-15 17:44:05.899864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.547 [2024-07-15 17:44:05.899978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 17:44:05.899993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.548 [2024-07-15 17:44:05.900007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.548 [2024-07-15 17:44:05.900034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.548 [2024-07-15 17:44:05.900060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd850f0 is same with the state(5) to be set 00:21:17.548 [2024-07-15 17:44:05.900249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:17.548 [2024-07-15 17:44:05.900269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21464 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21480 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21488 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21496 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900575] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21512 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21520 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21528 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21544 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21552 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21560 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.900958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.900970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.900981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.900992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21576 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.901044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21584 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.901091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21592 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.901139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 
17:44:05.901185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21608 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.901247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21616 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.901296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21624 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.901342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.548 [2024-07-15 17:44:05.901367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.548 [2024-07-15 17:44:05.901377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.548 [2024-07-15 17:44:05.901388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21640 len:8 PRP1 0x0 PRP2 0x0 00:21:17.548 [2024-07-15 17:44:05.901400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21648 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21656 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20784 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20792 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20808 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20816 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:20824 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20840 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20848 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20856 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.901954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.901966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.901977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.901987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20872 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 
[2024-07-15 17:44:05.902047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20880 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20888 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20904 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21672 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.549 [2024-07-15 17:44:05.902353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.549 [2024-07-15 17:44:05.902367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21680 len:8 PRP1 0x0 PRP2 0x0 00:21:17.549 [2024-07-15 17:44:05.902380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.549 [2024-07-15 17:44:05.902392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.902413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21688 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.902425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.902437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.902457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.902469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.902481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.902502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21704 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.902519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.902532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.902553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21712 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.902565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.902577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.902599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21720 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.902611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.902623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.902644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.902655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.902668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.902688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21736 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.902700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.902713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.902726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21744 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20912 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20920 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:17.550 [2024-07-15 17:44:05.914579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20936 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20944 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20952 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20968 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.550 [2024-07-15 17:44:05.914819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.550 [2024-07-15 17:44:05.914830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20976 len:8 PRP1 0x0 PRP2 0x0 00:21:17.550 [2024-07-15 17:44:05.914841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.550 [2024-07-15 17:44:05.914854] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.914889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.914901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20984 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.914914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.914927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.914938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.914949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.914961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.914973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.914984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.914995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21000 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21008 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21016 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21032 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21040 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21048 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21064 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21072 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 
17:44:05.915460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21080 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21096 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21104 len:8 PRP1 0x0 PRP2 0x0 00:21:17.551 [2024-07-15 17:44:05.915621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.551 [2024-07-15 17:44:05.915633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.551 [2024-07-15 17:44:05.915643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.551 [2024-07-15 17:44:05.915653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21112 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.915665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.915678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.915689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.915699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.915711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.915723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.915733] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.915744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21128 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.915755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.915767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.915778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.915788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21136 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.915800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.915812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.915823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.915833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21144 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.915845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.915887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.915901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.915912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.915925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.915938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.915949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.915960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21160 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.915972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.915984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.915995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.916005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21168 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.916018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.916030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.552 [2024-07-15 17:44:05.916041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:17.552 [2024-07-15 17:44:05.916051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21176 len:8 PRP1 0x0 PRP2 0x0 00:21:17.552 [2024-07-15 17:44:05.916064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.552 [2024-07-15 17:44:05.916076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[identical nvme_qpair_manual_complete_request / nvme_io_qpair_print_command / spdk_nvme_print_completion / nvme_qpair_abort_queued_reqs cycles repeat between 17:44:05.916087 and 17:44:05.918051 for the remaining queued WRITEs (lba 21184 through 21456, len:8) and the queued READs (lba 20728 through 20776, len:8), every one completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0]
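That abort storm is how the path teardown shows up at the I/O level: once failover starts, every request still queued on the dying qpair is completed manually with ABORTED - SQ DELETION before the controller is reset on the surviving trid (the bdev_nvme lines that follow). If the full console output has been saved, the volume of aborted I/O can be tallied with a couple of greps; the file name console.log below is only a stand-in for wherever the Jenkins output was captured:

    # number of queued requests completed manually during the resets
    grep -c 'nvme_qpair_manual_complete_request' console.log
    # how many of those completions carried the ABORTED - SQ DELETION status
    grep -c 'ABORTED - SQ DELETION' console.log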
00:21:17.554 [2024-07-15 17:44:05.918113] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb4cc0 was disconnected and freed. reset controller. 00:21:17.554 [2024-07-15 17:44:05.918135] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:17.554 [2024-07-15 17:44:05.918151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:17.554 [2024-07-15 17:44:05.918218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd850f0 (9): Bad file descriptor 00:21:17.554 [2024-07-15 17:44:05.921515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:17.554 [2024-07-15 17:44:06.044812] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:17.554 00:21:17.554 Latency(us) 00:21:17.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.554 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:17.554 Verification LBA range: start 0x0 length 0x4000 00:21:17.554 NVMe0n1 : 15.01 8699.43 33.98 502.99 0.00 13882.20 782.79 26602.76 00:21:17.554 =================================================================================================================== 00:21:17.554 Total : 8699.43 33.98 502.99 0.00 13882.20 782.79 26602.76 00:21:17.554 Received shutdown signal, test time was about 15.000000 seconds 00:21:17.554 00:21:17.554 Latency(us) 00:21:17.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.554 =================================================================================================================== 00:21:17.554 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2297820 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2297820 /var/tmp/bdevperf.sock 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2297820 ']' 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
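The grep -c 'Resetting controller successful' / (( count != 3 )) pair traced above is the pass criterion for the first half of the test: the 15-second bdevperf run is expected to have logged exactly three successful controller resets, one per forced path failure. A minimal standalone version of that check, assuming the bdevperf output was redirected to try.txt as it is in this trace, would be:

    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi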
00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.554 17:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:17.554 17:44:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.554 17:44:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:17.554 17:44:12 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:17.554 [2024-07-15 17:44:12.378499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:17.554 17:44:12 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:17.812 [2024-07-15 17:44:12.671370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:17.812 17:44:12 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.070 NVMe0n1 00:21:18.070 17:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.636 00:21:18.636 17:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.894 00:21:18.894 17:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:18.894 17:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:19.152 17:44:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:19.412 17:44:14 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:22.754 17:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:22.754 17:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:22.754 17:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2298484 00:21:22.754 17:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:22.754 17:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2298484 00:21:23.697 0 00:21:23.697 17:44:18 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:23.697 [2024-07-15 17:44:11.882755] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:21:23.697 [2024-07-15 17:44:11.882836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297820 ] 00:21:23.697 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.697 [2024-07-15 17:44:11.941402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.697 [2024-07-15 17:44:12.047559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.697 [2024-07-15 17:44:14.366057] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:23.697 [2024-07-15 17:44:14.366142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.697 [2024-07-15 17:44:14.366166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.697 [2024-07-15 17:44:14.366194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.697 [2024-07-15 17:44:14.366207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.697 [2024-07-15 17:44:14.366221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.697 [2024-07-15 17:44:14.366235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.697 [2024-07-15 17:44:14.366249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.698 [2024-07-15 17:44:14.366262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.698 [2024-07-15 17:44:14.366275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.698 [2024-07-15 17:44:14.366319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.698 [2024-07-15 17:44:14.366350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b480f0 (9): Bad file descriptor 00:21:23.698 [2024-07-15 17:44:14.378363] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:23.698 Running I/O for 1 seconds... 
00:21:23.698 00:21:23.698 Latency(us) 00:21:23.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.698 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:23.698 Verification LBA range: start 0x0 length 0x4000 00:21:23.698 NVMe0n1 : 1.01 8430.20 32.93 0.00 0.00 15084.11 2645.71 13010.11 00:21:23.698 =================================================================================================================== 00:21:23.698 Total : 8430.20 32.93 0.00 0.00 15084.11 2645.71 13010.11 00:21:23.698 17:44:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.698 17:44:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:23.955 17:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:24.213 17:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:24.213 17:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:24.471 17:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:24.729 17:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:28.014 17:44:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:28.014 17:44:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2297820 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2297820 ']' 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2297820 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2297820 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2297820' 00:21:28.014 killing process with pid 2297820 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2297820 00:21:28.014 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2297820 00:21:28.273 17:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:28.273 17:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:28.531 
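Steps @76 through @101 above are the second half of the failover exercise in compressed form: new listeners are published on ports 4421 and 4422, the same subsystem is attached to the bdevperf instance through three trids, and the paths are then detached one at a time (4420 before the one-second perform_tests run, 4422 and 4421 after it), with a check after each that the NVMe0 controller is still registered. Gathered into one place, with every path, port, and flag copied from the trace and only the loop structure added as shorthand:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    # publish two more listeners for the subsystem
    for port in 4421 4422; do
        $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
    done
    # attach controller NVMe0 through each of the three trids
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done
    # drop the active path and confirm the controller is still registered
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0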
17:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.531 rmmod nvme_tcp 00:21:28.531 rmmod nvme_fabrics 00:21:28.531 rmmod nvme_keyring 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2295535 ']' 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2295535 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2295535 ']' 00:21:28.531 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2295535 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2295535 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2295535' 00:21:28.789 killing process with pid 2295535 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2295535 00:21:28.789 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2295535 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.047 17:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.953 17:44:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.953 00:21:30.953 real 0m35.585s 00:21:30.953 user 2m1.313s 00:21:30.953 sys 0m7.440s 00:21:30.953 17:44:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:30.953 17:44:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
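Both killprocess calls above follow the same visible pattern: check that the pid is non-empty and still alive, look up its comm name (with a special case when the name is sudo, which does not trigger in this run), print the "killing process with pid" marker, then kill and wait. A rough reconstruction of that helper, based only on what the trace shows rather than on autotest_common.sh itself:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # refuse an empty pid
        kill -0 "$pid" || return 1         # still running?
        ps --no-headers -o comm= "$pid"    # reactor_0 / reactor_1 in this run
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                # reap it when it is our own child
    }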
00:21:30.953 ************************************ 00:21:30.953 END TEST nvmf_failover 00:21:30.953 ************************************ 00:21:30.953 17:44:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:30.953 17:44:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:30.953 17:44:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:30.953 17:44:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.953 17:44:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.953 ************************************ 00:21:30.953 START TEST nvmf_host_discovery 00:21:30.953 ************************************ 00:21:30.953 17:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:31.213 * Looking for test storage... 00:21:31.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.213 17:44:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:31.214 17:44:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.214 17:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.120 17:44:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:33.120 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:33.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.120 17:44:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:33.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:33.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.120 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.121 17:44:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.121 17:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:21:33.121 00:21:33.121 --- 10.0.0.2 ping statistics --- 00:21:33.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.121 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:33.121 00:21:33.121 --- 10.0.0.1 ping statistics --- 00:21:33.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.121 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2301201 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2301201 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2301201 ']' 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.121 17:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.121 [2024-07-15 17:44:28.182307] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:21:33.121 [2024-07-15 17:44:28.182386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.121 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.121 [2024-07-15 17:44:28.250553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.380 [2024-07-15 17:44:28.365892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.380 [2024-07-15 17:44:28.365947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.380 [2024-07-15 17:44:28.365963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.380 [2024-07-15 17:44:28.365985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.380 [2024-07-15 17:44:28.365997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
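Because NET_TYPE=phy, the test splits the two e810 ports across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24 while the initiator side keeps cvl_0_1 at 10.0.0.1/24, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace (that is what the NVMF_TARGET_NS_CMD prefix on NVMF_APP is for). The same wiring, condensed from the trace with the interface names and addresses unchanged:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1    # target -> initiator
    # the target itself then runs inside the namespace
    ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2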
00:21:33.380 [2024-07-15 17:44:28.366026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 [2024-07-15 17:44:29.185221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 [2024-07-15 17:44:29.193404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 null0 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 null1 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2301353 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2301353 /tmp/host.sock 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2301353 ']' 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:34.316 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.316 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 [2024-07-15 17:44:29.271273] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:21:34.316 [2024-07-15 17:44:29.271349] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301353 ] 00:21:34.316 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.316 [2024-07-15 17:44:29.329056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.317 [2024-07-15 17:44:29.436242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.574 17:44:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:34.574 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:34.575 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.833 [2024-07-15 17:44:29.867255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.833 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.834 17:44:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:34.834 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.092 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:35.092 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:35.092 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:35.092 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:35.092 17:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:35.092 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.092 17:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:35.092 17:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:35.661 [2024-07-15 17:44:30.625094] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:35.661 [2024-07-15 17:44:30.625137] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:35.661 [2024-07-15 17:44:30.625179] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:35.661 [2024-07-15 17:44:30.711444] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:35.920 [2024-07-15 17:44:30.936977] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:35.920 [2024-07-15 17:44:30.937002] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.920 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.177 17:44:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.177 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.178 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.178 [2024-07-15 17:44:31.311373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:36.178 [2024-07-15 17:44:31.312430] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:36.178 [2024-07-15 17:44:31.312483] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:36.436 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.437 [2024-07-15 17:44:31.439137] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:36.437 17:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:36.696 [2024-07-15 17:44:31.739606] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:36.696 [2024-07-15 17:44:31.739641] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:36.696 [2024-07-15 17:44:31.739651] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.630 [2024-07-15 17:44:32.536148] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:37.630 [2024-07-15 17:44:32.536205] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:37.630 [2024-07-15 17:44:32.537740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-15 17:44:32.537778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-15 17:44:32.537797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-15 17:44:32.537812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-15 17:44:32.537828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-15 17:44:32.537844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-15 17:44:32.537859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-15 17:44:32.537873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-15 17:44:32.537898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.630 17:44:32 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:37.630 [2024-07-15 17:44:32.547740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.630 [2024-07-15 17:44:32.557787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.630 [2024-07-15 17:44:32.558081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.630 [2024-07-15 17:44:32.558111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7c00 with addr=10.0.0.2, port=4420 00:21:37.630 [2024-07-15 17:44:32.558129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.630 [2024-07-15 17:44:32.558174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.630 [2024-07-15 17:44:32.558200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.630 [2024-07-15 17:44:32.558215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.630 [2024-07-15 17:44:32.558233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.630 [2024-07-15 17:44:32.558270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
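The discovery traffic and the reconnect errors above are driven entirely by RPC calls against the two applications: the target on the default socket (/var/tmp/spdk.sock) is provisioned step by step, while a second nvmf_tgt on /tmp/host.sock plays the host and follows the discovery service. Condensed from the rpc_cmd invocations in this trace up to this point (rpc_cmd is the autotest wrapper around scripts/rpc.py):

# target side: transport, discovery listener on 8009, backing null bdevs
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine
# host side (-s /tmp/host.sock): start following the discovery service
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# target side: grow the subsystem; each change reaches the host via AER + a fresh discovery log page
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420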
00:21:37.630 [2024-07-15 17:44:32.567872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.630 [2024-07-15 17:44:32.568118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.630 [2024-07-15 17:44:32.568146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7c00 with addr=10.0.0.2, port=4420 00:21:37.630 [2024-07-15 17:44:32.568162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.630 [2024-07-15 17:44:32.568184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.630 [2024-07-15 17:44:32.568220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.630 [2024-07-15 17:44:32.568235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.630 [2024-07-15 17:44:32.568248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.630 [2024-07-15 17:44:32.568269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.630 [2024-07-15 17:44:32.577965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.630 [2024-07-15 17:44:32.578244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.630 [2024-07-15 17:44:32.578274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7c00 with addr=10.0.0.2, port=4420 00:21:37.630 [2024-07-15 17:44:32.578292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.630 [2024-07-15 17:44:32.578316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.630 [2024-07-15 17:44:32.578352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.630 [2024-07-15 17:44:32.578371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.630 [2024-07-15 17:44:32.578386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.630 [2024-07-15 17:44:32.578406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:37.630 [2024-07-15 17:44:32.588048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.630 [2024-07-15 17:44:32.588323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.630 [2024-07-15 17:44:32.588352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7c00 with addr=10.0.0.2, port=4420 00:21:37.630 [2024-07-15 17:44:32.588368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.630 [2024-07-15 17:44:32.588390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.630 [2024-07-15 17:44:32.588425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.630 [2024-07-15 17:44:32.588442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.630 [2024-07-15 17:44:32.588456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.630 [2024-07-15 17:44:32.588492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
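Every step in this test is gated by the same two helpers that keep appearing in the trace: waitforcondition retries a shell condition for up to ten seconds, and get_notification_count asks the host application how many new bdev notifications arrived since the last check. Roughly, as a sketch of the autotest_common.sh and host/discovery.sh logic visible above (the exact failure path of waitforcondition is not shown in this run and is assumed here):

waitforcondition() {                 # autotest_common.sh@912-918, condensed
    local cond=$1 max=10
    while (( max-- )); do
        eval "$cond" && return 0     # condition met, stop waiting
        sleep 1
    done
    return 1                         # assumed: give up after ~10 attempts
}

get_notification_count() {           # host/discovery.sh@74-75, condensed
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

# e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'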
00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.630 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.630 [2024-07-15 17:44:32.598139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.630 [2024-07-15 17:44:32.598385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.630 [2024-07-15 17:44:32.598413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7c00 with addr=10.0.0.2, port=4420 00:21:37.630 [2024-07-15 17:44:32.598429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.630 [2024-07-15 17:44:32.598451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.630 [2024-07-15 17:44:32.598472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.630 [2024-07-15 17:44:32.598485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.630 [2024-07-15 17:44:32.598498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.630 [2024-07-15 17:44:32.598516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.630 [2024-07-15 17:44:32.608233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.630 [2024-07-15 17:44:32.608450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.630 [2024-07-15 17:44:32.608480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7c00 with addr=10.0.0.2, port=4420 00:21:37.630 [2024-07-15 17:44:32.608497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.630 [2024-07-15 17:44:32.608522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.630 [2024-07-15 17:44:32.608544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.631 [2024-07-15 17:44:32.608559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.631 [2024-07-15 17:44:32.608573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.631 [2024-07-15 17:44:32.608600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
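The errno 111 storm above is the expected fallout of the listener-removal step: the host keeps retrying the now-closed 4420 path until the next discovery log page prunes it, and the test simply waits for the advertised path list to shrink to the second port. The check it loops on, condensed from the get_subsystem_paths calls in this trace:

# list the trsvcid of every active path for controller nvme0 as seen by the host application
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# after nvmf_subsystem_remove_listener this should go from "4420 4421" to "4421", i.e.
# waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'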
00:21:37.631 [2024-07-15 17:44:32.618309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.631 [2024-07-15 17:44:32.618545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.631 [2024-07-15 17:44:32.618575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7c00 with addr=10.0.0.2, port=4420 00:21:37.631 [2024-07-15 17:44:32.618593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7c00 is same with the state(5) to be set 00:21:37.631 [2024-07-15 17:44:32.618617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7c00 (9): Bad file descriptor 00:21:37.631 [2024-07-15 17:44:32.618638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.631 [2024-07-15 17:44:32.618653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.631 [2024-07-15 17:44:32.618667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.631 [2024-07-15 17:44:32.618687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.631 [2024-07-15 17:44:32.624927] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:37.631 [2024-07-15 17:44:32.624958] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:37.631 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.888 17:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.823 [2024-07-15 17:44:33.872216] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:38.823 [2024-07-15 17:44:33.872257] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:38.823 [2024-07-15 17:44:33.872281] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:39.123 [2024-07-15 17:44:33.959557] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:39.123 [2024-07-15 17:44:34.026972] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:39.123 [2024-07-15 17:44:34.027007] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.123 17:44:34 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.123 request: 00:21:39.123 { 00:21:39.123 "name": "nvme", 00:21:39.123 "trtype": "tcp", 00:21:39.123 "traddr": "10.0.0.2", 00:21:39.123 "adrfam": "ipv4", 00:21:39.123 "trsvcid": "8009", 00:21:39.123 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:39.123 "wait_for_attach": true, 00:21:39.123 "method": "bdev_nvme_start_discovery", 00:21:39.123 "req_id": 1 00:21:39.123 } 00:21:39.123 Got JSON-RPC error response 00:21:39.123 response: 00:21:39.123 { 00:21:39.123 "code": -17, 00:21:39.123 "message": "File exists" 00:21:39.123 } 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.123 request: 00:21:39.123 { 00:21:39.123 "name": "nvme_second", 00:21:39.123 "trtype": "tcp", 00:21:39.123 "traddr": "10.0.0.2", 00:21:39.123 "adrfam": "ipv4", 00:21:39.123 "trsvcid": "8009", 00:21:39.123 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:39.123 "wait_for_attach": true, 00:21:39.123 "method": "bdev_nvme_start_discovery", 00:21:39.123 "req_id": 1 00:21:39.123 } 00:21:39.123 Got JSON-RPC error response 00:21:39.123 response: 00:21:39.123 { 00:21:39.123 "code": -17, 00:21:39.123 "message": "File exists" 00:21:39.123 } 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
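
The two rejected calls above show the duplicate-registration guard in the bdev_nvme_start_discovery RPC: while the discovery service for 10.0.0.2:8009 is already running (attached as nvme0), any further registration against that same discovery endpoint fails with JSON-RPC error -17 ("File exists"), whether it reuses the name nvme or a new name such as nvme_second. A minimal stand-alone reproduction sketch, assuming the same rpc.py path and /tmp/host.sock bdev application used in this run (not part of the captured output), could look like:

    # Reproduction sketch only -- assumes a host app listening on /tmp/host.sock
    # and a discovery target at 10.0.0.2:8009, as in the log above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/tmp/host.sock

    # First registration starts the discovery service and, via the discovery log
    # page, attaches controller nvme0 for nqn.2016-06.io.spdk:cnode0.
    $RPC -s $SOCK bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # A second registration against the same discovery endpoint is rejected;
    # rpc.py exits non-zero and reports code -17, "File exists".
    if ! $RPC -s $SOCK bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate discovery registration rejected as expected (-17, File exists)"
    fi
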
00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.123 17:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.498 [2024-07-15 17:44:35.231025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:40.498 [2024-07-15 17:44:35.231072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf2c90 with addr=10.0.0.2, port=8010 00:21:40.498 [2024-07-15 17:44:35.231095] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:40.498 [2024-07-15 17:44:35.231109] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:40.498 [2024-07-15 17:44:35.231122] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:41.435 [2024-07-15 17:44:36.233489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.435 [2024-07-15 17:44:36.233528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf2c90 with addr=10.0.0.2, port=8010 00:21:41.435 [2024-07-15 17:44:36.233550] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:41.435 [2024-07-15 17:44:36.233564] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:41.435 [2024-07-15 17:44:36.233577] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:42.372 [2024-07-15 17:44:37.235715] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:42.372 request: 00:21:42.372 { 00:21:42.372 "name": "nvme_second", 00:21:42.372 "trtype": "tcp", 00:21:42.372 "traddr": "10.0.0.2", 00:21:42.372 "adrfam": "ipv4", 00:21:42.372 "trsvcid": "8010", 00:21:42.372 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:42.372 "wait_for_attach": false, 00:21:42.372 "attach_timeout_ms": 3000, 00:21:42.372 "method": "bdev_nvme_start_discovery", 00:21:42.372 "req_id": 1 00:21:42.372 } 00:21:42.372 Got JSON-RPC error response 00:21:42.372 response: 00:21:42.372 { 00:21:42.372 
"code": -110, 00:21:42.372 "message": "Connection timed out" 00:21:42.372 } 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2301353 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.372 rmmod nvme_tcp 00:21:42.372 rmmod nvme_fabrics 00:21:42.372 rmmod nvme_keyring 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2301201 ']' 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2301201 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2301201 ']' 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2301201 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2301201 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2301201' 00:21:42.372 killing process with pid 2301201 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2301201 00:21:42.372 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2301201 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.631 17:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.166 00:21:45.166 real 0m13.646s 00:21:45.166 user 0m19.744s 00:21:45.166 sys 0m2.755s 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.166 ************************************ 00:21:45.166 END TEST nvmf_host_discovery 00:21:45.166 ************************************ 00:21:45.166 17:44:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:45.166 17:44:39 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:45.166 17:44:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:45.166 17:44:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.166 17:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:45.166 ************************************ 00:21:45.166 START TEST nvmf_host_multipath_status 00:21:45.166 ************************************ 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:45.166 * Looking for test storage... 
00:21:45.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.166 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.167 17:44:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.167 17:44:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:46.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:46.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
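
The probe output above comes from the gather_supported_nvmf_pci_devs helper in the nvmf common.sh script: it matches PCI vendor/device IDs against known Intel E810/X722 and Mellanox parts (here 0x8086:0x159b, the two E810 ports bound to the ice driver), and the entries that follow resolve each matched PCI function to its kernel net device through /sys/bus/pci/devices/<pci>/net/, producing the "Found net devices under ..." lines below. A rough stand-alone equivalent of that resolution step, assuming pciutils' lspci is available (a sketch, not the harness code itself), might be:

    # Sketch only -- approximates the sysfs lookup the harness performs below.
    # 0x8086:0x159b is the E810 vendor/device pair reported in this run.
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        # Each PCI function exposes its net device name under sysfs.
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done
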
00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:46.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:46.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.547 17:44:41 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.547 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.806 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.806 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.806 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:46.806 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.806 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.806 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.806 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:46.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:21:46.807 00:21:46.807 --- 10.0.0.2 ping statistics --- 00:21:46.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.807 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:21:46.807 00:21:46.807 --- 10.0.0.1 ping statistics --- 00:21:46.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.807 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2304380 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2304380 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2304380 ']' 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.807 17:44:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:46.807 [2024-07-15 17:44:41.838273] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:21:46.807 [2024-07-15 17:44:41.838357] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.807 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.807 [2024-07-15 17:44:41.906974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:47.067 [2024-07-15 17:44:42.023020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.067 [2024-07-15 17:44:42.023075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.067 [2024-07-15 17:44:42.023091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.067 [2024-07-15 17:44:42.023104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.067 [2024-07-15 17:44:42.023115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.067 [2024-07-15 17:44:42.023204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.067 [2024-07-15 17:44:42.023211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2304380 00:21:48.004 17:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:48.004 [2024-07-15 17:44:43.044949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.004 17:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:48.263 Malloc0 00:21:48.263 17:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:48.521 17:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.779 17:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.037 [2024-07-15 17:44:44.125219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.037 17:44:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:49.295 [2024-07-15 17:44:44.361894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2304677 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2304677 /var/tmp/bdevperf.sock 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2304677 ']' 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.295 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.862 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.862 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:49.862 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:49.862 17:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:50.431 Nvme0n1 00:21:50.431 17:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:50.999 Nvme0n1 00:21:50.999 17:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:50.999 17:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:52.904 17:44:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:52.904 17:44:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:53.166 17:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:53.426 17:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:54.367 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:54.367 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:54.367 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.367 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:54.630 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:54.630 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:54.630 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.630 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:54.932 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:54.932 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:54.932 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.932 17:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:55.193 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.193 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:55.193 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.193 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:55.451 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.451 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:55.451 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.451 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:55.709 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.709 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:55.709 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.709 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:55.967 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.967 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:55.967 17:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:56.225 17:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:56.484 17:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:57.420 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:57.420 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:57.420 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.420 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:57.678 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:57.678 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:57.678 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.678 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:57.936 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.936 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:57.936 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.936 17:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:58.195 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.195 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:58.195 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.195 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:58.453 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.453 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:58.453 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.453 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:58.711 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.711 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:58.711 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.711 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:58.968 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.968 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:58.968 17:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:59.225 17:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:59.485 17:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:00.422 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:00.422 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:00.422 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.422 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:00.681 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.681 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:00.681 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.681 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:00.939 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:00.939 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:00.939 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.939 17:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:01.197 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.197 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:01.197 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.197 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:01.454 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.454 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:01.454 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.454 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:01.711 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.711 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:01.711 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.711 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:01.968 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.968 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:01.968 17:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:02.225 17:44:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:02.514 17:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:03.452 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:03.452 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:03.452 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.452 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:03.709 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.709 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:03.709 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.709 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:03.966 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:03.966 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:03.966 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.966 17:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:04.223 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.223 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:04.223 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.223 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:04.480 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.481 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:04.481 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.481 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:04.739 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:04.739 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:04.739 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.739 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:04.997 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.997 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:04.997 17:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:05.255 17:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:05.514 17:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:06.448 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:06.448 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:06.448 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.448 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:06.707 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.707 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:06.707 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.707 17:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:06.965 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.965 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:06.965 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.965 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:07.224 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.224 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:07.224 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.224 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:07.482 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.482 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:07.482 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.482 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:07.740 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.740 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:07.740 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.740 17:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.007 17:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.007 17:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:08.007 17:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:08.291 17:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:08.550 17:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:09.485 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:09.485 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:09.743 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.743 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.000 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.000 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:10.000 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.000 17:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.259 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.259 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.259 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.259 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:10.519 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.519 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:10.519 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.519 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:10.778 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.778 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:10.778 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.778 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.039 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.039 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:11.039 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.039 17:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:11.298 17:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.298 17:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:11.559 17:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:11.559 17:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:11.817 17:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:11.817 17:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:13.187 17:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:13.187 17:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:13.187 17:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.187 17:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:13.187 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.187 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:13.187 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.187 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:13.444 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.444 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:13.444 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.444 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:13.701 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.701 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:13.701 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.701 17:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:13.958 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.958 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:13.958 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.958 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:14.214 17:45:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.214 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:14.215 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.215 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:14.472 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.472 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:14.472 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:14.734 17:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:14.992 17:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.371 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:16.628 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.628 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:16.628 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.628 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:16.886 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.886 17:45:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:16.886 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.886 17:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:17.143 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.143 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:17.143 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.143 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:17.400 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.400 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:17.400 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.400 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:17.658 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.658 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:17.658 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:17.915 17:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:18.173 17:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:19.105 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:19.105 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:19.105 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.105 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:19.363 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.363 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:19.363 17:45:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.363 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.621 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.621 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.621 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.621 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:19.877 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.878 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:19.878 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.878 17:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:20.135 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.135 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:20.135 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.135 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.393 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.393 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:20.393 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.393 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:20.649 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.649 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:20.649 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:20.952 17:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:21.209 17:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:22.155 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:22.155 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:22.155 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.155 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.411 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.411 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:22.411 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.411 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:22.668 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.668 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:22.668 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.668 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:22.925 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.925 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:22.925 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.925 17:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.183 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.183 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:23.183 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.183 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:23.442 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.442 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:23.442 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.442 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2304677 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2304677 ']' 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2304677 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2304677 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2304677' 00:22:23.701 killing process with pid 2304677 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2304677 00:22:23.701 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2304677 00:22:23.964 Connection closed with partial response: 00:22:23.964 00:22:23.964 00:22:23.964 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2304677 00:22:23.964 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.964 [2024-07-15 17:44:44.418969] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:22:23.964 [2024-07-15 17:44:44.419049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304677 ] 00:22:23.964 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.964 [2024-07-15 17:44:44.481698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.964 [2024-07-15 17:44:44.591112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.964 Running I/O for 90 seconds... 
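Every check_status/port_status step traced above reduces to the same two-part pattern: query bdev_nvme_get_io_paths over the bdevperf RPC socket, then filter one boolean per listener port with jq. A minimal sketch of that pattern, assuming the same socket (/var/tmp/bdevperf.sock) and listener ports (4420/4421) used in this run and that rpc.py is invoked from the spdk checkout; the helper name port_flag is invented here purely for illustration:

  # Pull one io_path flag (current / connected / accessible) for one listener port.
  port_flag() {
      local port=$1 flag=$2
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag"
  }

  # Example: after set_ANA_state non_optimized optimized, 4421 is expected to be the current path.
  [[ $(port_flag 4421 current) == true ]] && echo "4421 is the active path"

Each set_ANA_state step in the trace simply calls nvmf_subsystem_listener_set_ana_state for ports 4420 and 4421, sleeps one second, and then the six flags (current/connected/accessible for both ports) are re-checked against the expected values.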
00:22:23.964 [2024-07-15 17:45:00.237443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.237972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.237995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-07-15 17:45:00.238355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:23.964 [2024-07-15 17:45:00.238377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
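The completion dumps in this stretch show the target failing in-flight READs with the NVMe path-related status ASYMMETRIC ACCESS INACCESSIBLE (status code type 03h, status code 02h) once a listener's ANA state has been flipped to inaccessible; the host multipath layer then steers the retried I/O to the remaining path, which is what keeps bdevperf running through each transition. As a purely illustrative way to post-process a saved copy of this trace (for example the try.txt dumped above), the ANA-inaccessible completions can be tallied per queue with grep and awk:

  # Count ASYMMETRIC ACCESS INACCESSIBLE completions per qid in a saved log.
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' try.txt \
      | awk '{print $NF}' | sort | uniq -c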
00:22:23.965 [2024-07-15 17:45:00.238813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.238953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.238969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 
nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:23.965 [2024-07-15 17:45:00.239925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-07-15 17:45:00.239943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.239983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.239999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 
m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.240959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.240983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.241003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.241028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.241044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.241068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.241084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.241108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-07-15 17:45:00.241124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:23.966 [2024-07-15 17:45:00.241148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-07-15 17:45:00.241179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.241203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-07-15 17:45:00.241219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.241257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-07-15 17:45:00.241273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.241296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-07-15 17:45:00.241325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.241949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-07-15 17:45:00.241972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-07-15 17:45:00.242018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.242785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.242988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.243012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.243047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.243066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.243098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.243115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.243146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.243163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.243209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.243226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:23.967 [2024-07-15 17:45:00.243255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.967 [2024-07-15 17:45:00.243271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:00.243669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:00.243686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.134645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-07-15 17:45:16.134711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.134778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-07-15 17:45:16.134801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.134826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-07-15 17:45:16.134844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.134867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.134893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.134917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.134934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.134957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.134974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.134996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-07-15 17:45:16.135012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
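The long run of completions above all carry the same status, which SPDK prints as "(03/02)": status code type 0x3 (Path Related Status) and status code 0x02 (Asymmetric Access Inaccessible), meaning the I/O reached the subsystem over a path whose ANA group is currently inaccessible, which is the condition this multipath-status test exercises. The trailing p/m/dnr fields are the phase, more, and do-not-retry bits of completion dword 3. A minimal, illustrative decoder for those fields (the function name is made up here; this is not the SPDK source):

    # Decode the status fields of an NVMe completion dword 3 (base-spec layout).
    def decode_cpl_dw3(dw3: int) -> dict:
        return {
            "p":   (dw3 >> 16) & 0x1,   # phase tag
            "sc":  (dw3 >> 17) & 0xFF,  # status code: 0x02 = Asymmetric Access Inaccessible
            "sct": (dw3 >> 25) & 0x7,   # status code type: 0x3 = Path Related Status
            "crd": (dw3 >> 28) & 0x3,   # command retry delay
            "m":   (dw3 >> 30) & 0x1,   # more
            "dnr": (dw3 >> 31) & 0x1,   # do not retry
        }

    # A dword built from SCT=0x3, SC=0x02 reproduces the "(03/02) ... p:0 m:0 dnr:0" pattern above.
    print(decode_cpl_dw3((0x3 << 25) | (0x02 << 17)))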
00:22:23.968 [2024-07-15 17:45:16.135035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.135971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.135987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.136009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.136025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.136046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.968 [2024-07-15 17:45:16.136062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:23.968 [2024-07-15 17:45:16.136084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:23.969 [2024-07-15 17:45:16.136758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.136832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.136967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.136984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.137006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.137022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.137043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.969 [2024-07-15 17:45:16.137060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.137081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.137098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.137119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.137135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.137157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.137177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:23.969 [2024-07-15 17:45:16.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-07-15 17:45:16.137233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.139654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.139716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.139754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.139791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.970 [2024-07-15 17:45:16.139828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.970 [2024-07-15 17:45:16.139894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.970 [2024-07-15 17:45:16.139936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.139974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.139996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.140012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.140033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.140049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.140071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.140092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.140115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.140131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.140153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.140169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:23.970 [2024-07-15 17:45:16.140192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.970 [2024-07-15 17:45:16.140208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:23.970 Received shutdown signal, test time was about 32.648782 seconds 00:22:23.970 00:22:23.970 Latency(us) 00:22:23.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.970 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:23.970 Verification LBA range: start 0x0 length 0x4000 00:22:23.970 Nvme0n1 : 32.65 7546.83 29.48 0.00 0.00 16930.45 1377.47 4026531.84 00:22:23.970 =================================================================================================================== 00:22:23.970 Total : 7546.83 29.48 0.00 0.00 16930.45 1377.47 4026531.84 00:22:23.970 17:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:24.228 
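The summary above closes the 32.6-second verify job against Nvme0n1 (core mask 0x4, queue depth 128, 4096-byte I/O) before the subsystem is deleted over RPC and nvmftestfini starts tearing the target down. The MiB/s column is just IOPS times the 4 KiB I/O size; a quick cross-check against the reported numbers (variable names here are illustrative, not from the test scripts):

    runtime_s = 32.65
    iops = 7546.83
    io_size_bytes = 4096
    mib_per_s = iops * io_size_bytes / (1024 * 1024)   # ~29.48, matching the table
    total_ios = iops * runtime_s                        # roughly 246k I/Os for the run
    print(f"{mib_per_s:.2f} MiB/s over ~{total_ios:,.0f} I/Os in {runtime_s} s")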
17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:24.228 rmmod nvme_tcp 00:22:24.228 rmmod nvme_fabrics 00:22:24.228 rmmod nvme_keyring 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2304380 ']' 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2304380 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2304380 ']' 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2304380 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2304380 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2304380' 00:22:24.228 killing process with pid 2304380 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2304380 00:22:24.228 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2304380 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.486 17:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.029 17:45:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:27.029 00:22:27.029 real 0m41.891s 00:22:27.029 user 1m59.398s 00:22:27.029 sys 0m13.259s 00:22:27.029 17:45:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:27.029 17:45:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:27.029 
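The cleanup trace above follows the usual pattern: nvmfcleanup retries 'modprobe -v -r' for nvme-tcp and nvme-fabrics (the bare rmmod lines are the resulting module unloads), and killprocess only signals the target application after confirming the PID is still alive ('kill -0') and that its command name is not sudo. A rough sketch of that guard, assuming a plain SIGTERM followed by a wait as in the trace (this is not the autotest_common.sh helper itself):

    import os, signal, subprocess

    def kill_spdk_app(pid: int) -> None:
        os.kill(pid, 0)  # probe only: raises ProcessLookupError if the PID is already gone
        comm = subprocess.check_output(
            ["ps", "--no-headers", "-o", "comm=", str(pid)], text=True).strip()
        if comm == "sudo":
            # the real helper treats a sudo wrapper specially rather than signalling it directly
            raise RuntimeError(f"pid {pid} is a sudo wrapper, not the reactor")
        print(f"killing process with pid {pid}")
        os.kill(pid, signal.SIGTERM)
        os.waitpid(pid, 0)  # valid for child processes; the shell script simply uses 'wait'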
************************************ 00:22:27.029 END TEST nvmf_host_multipath_status 00:22:27.029 ************************************ 00:22:27.029 17:45:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:27.029 17:45:21 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:27.029 17:45:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:27.029 17:45:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.029 17:45:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:27.029 ************************************ 00:22:27.029 START TEST nvmf_discovery_remove_ifc 00:22:27.029 ************************************ 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:27.029 * Looking for test storage... 00:22:27.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.029 17:45:21 
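Sourcing test/nvmf/common.sh above seeds the environment for the discovery test: TCP ports 4420/4421/4422, the 192.168.100 address prefix, and a host NQN freshly generated by nvme gen-hostnqn, whose uuid suffix doubles as the host ID the tests pass alongside it. A small illustration of how those two values relate (the variable names are mine; the NQN is the one printed in the trace):

    # The host ID reused by NVME_HOST is just the uuid portion of the generated host NQN.
    hostnqn = "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
    hostid = hostnqn.split("uuid:", 1)[1]
    nvme_host_args = [f"--hostnqn={hostnqn}", f"--hostid={hostid}"]
    print(nvme_host_args)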
nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.029 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.030 17:45:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.030 17:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@297 -- # x722=() 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:28.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:28.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice 
== unknown ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:28.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:28.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.933 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:22:28.934 00:22:28.934 --- 10.0.0.2 ping statistics --- 00:22:28.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.934 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:28.934 00:22:28.934 --- 10.0.0.1 ping statistics --- 00:22:28.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.934 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2311508 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2311508 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2311508 ']' 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.934 17:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:28.934 [2024-07-15 17:45:23.813849] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:22:28.934 [2024-07-15 17:45:23.813962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.934 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.934 [2024-07-15 17:45:23.873791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.934 [2024-07-15 17:45:23.973291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.934 [2024-07-15 17:45:23.973346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.934 [2024-07-15 17:45:23.973382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.934 [2024-07-15 17:45:23.973394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.934 [2024-07-15 17:45:23.973402] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.934 [2024-07-15 17:45:23.973426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.192 [2024-07-15 17:45:24.128560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.192 [2024-07-15 17:45:24.136747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:29.192 null0 00:22:29.192 [2024-07-15 17:45:24.168690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2311529 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2311529 /tmp/host.sock 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2311529 ']' 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:22:29.192 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.192 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:29.193 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.193 [2024-07-15 17:45:24.240873] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:22:29.193 [2024-07-15 17:45:24.240980] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311529 ] 00:22:29.193 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.193 [2024-07-15 17:45:24.307630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.451 [2024-07-15 17:45:24.426371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.451 17:45:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.826 [2024-07-15 17:45:25.627082] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:30.826 [2024-07-15 17:45:25.627118] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:30.826 [2024-07-15 17:45:25.627144] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:30.826 [2024-07-15 17:45:25.714460] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:30.826 [2024-07-15 17:45:25.939841] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:30.826 [2024-07-15 17:45:25.939930] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:30.826 [2024-07-15 17:45:25.939970] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:30.826 [2024-07-15 17:45:25.939996] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:30.826 [2024-07-15 17:45:25.940036] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.826 [2024-07-15 17:45:25.945373] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x772870 was disconnected and freed. delete nvme_qpair. 
00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.826 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.085 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:31.085 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:31.085 17:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.085 17:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:32.016 17:45:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:33.393 17:45:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.332 17:45:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:35.270 17:45:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.203 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.203 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.203 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.203 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.203 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.203 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.203 
17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.204 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.204 17:45:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.463 [2024-07-15 17:45:31.381373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:36.464 [2024-07-15 17:45:31.381445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.464 [2024-07-15 17:45:31.381471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-07-15 17:45:31.381491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.464 [2024-07-15 17:45:31.381506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-07-15 17:45:31.381522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.464 [2024-07-15 17:45:31.381538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-07-15 17:45:31.381554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.464 [2024-07-15 17:45:31.381569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-07-15 17:45:31.381584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.464 [2024-07-15 17:45:31.381599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.464 [2024-07-15 17:45:31.381613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x739300 is same with the state(5) to be set 00:22:36.464 [2024-07-15 17:45:31.391388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x739300 (9): Bad file descriptor 00:22:36.464 [2024-07-15 17:45:31.401440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.408 [2024-07-15 17:45:32.434939] 
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:37.408 [2024-07-15 17:45:32.435031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x739300 with addr=10.0.0.2, port=4420 00:22:37.408 [2024-07-15 17:45:32.435059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x739300 is same with the state(5) to be set 00:22:37.408 [2024-07-15 17:45:32.435109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x739300 (9): Bad file descriptor 00:22:37.408 [2024-07-15 17:45:32.435605] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:37.408 [2024-07-15 17:45:32.435636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:37.408 [2024-07-15 17:45:32.435652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:37.408 [2024-07-15 17:45:32.435669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:37.408 [2024-07-15 17:45:32.435699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.408 [2024-07-15 17:45:32.435717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:37.408 17:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.401 [2024-07-15 17:45:33.438227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.401 [2024-07-15 17:45:33.438284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.401 [2024-07-15 17:45:33.438302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.401 [2024-07-15 17:45:33.438320] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:38.401 [2024-07-15 17:45:33.438351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:38.401 [2024-07-15 17:45:33.438402] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:38.401 [2024-07-15 17:45:33.438452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.401 [2024-07-15 17:45:33.438477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.401 [2024-07-15 17:45:33.438498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.401 [2024-07-15 17:45:33.438514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.401 [2024-07-15 17:45:33.438529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.401 [2024-07-15 17:45:33.438544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.401 [2024-07-15 17:45:33.438569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.402 [2024-07-15 17:45:33.438584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.402 [2024-07-15 17:45:33.438600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.402 [2024-07-15 17:45:33.438614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.402 [2024-07-15 17:45:33.438629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:38.402 [2024-07-15 17:45:33.438737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738780 (9): Bad file descriptor 00:22:38.402 [2024-07-15 17:45:33.439767] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:38.402 [2024-07-15 17:45:33.439793] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.402 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:38.661 17:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:39.601 17:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:40.537 [2024-07-15 17:45:35.494106] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:40.537 [2024-07-15 17:45:35.494138] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:40.537 [2024-07-15 17:45:35.494183] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.537 [2024-07-15 17:45:35.581464] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:40.537 17:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:40.794 [2024-07-15 17:45:35.805036] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:40.794 [2024-07-15 17:45:35.805083] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:40.794 [2024-07-15 17:45:35.805115] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:40.794 [2024-07-15 17:45:35.805137] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:40.794 [2024-07-15 17:45:35.805151] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:40.794 [2024-07-15 17:45:35.811732] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x740110 was disconnected and freed. delete nvme_qpair. 
00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2311529 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2311529 ']' 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2311529 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311529 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311529' 00:22:41.727 killing process with pid 2311529 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2311529 00:22:41.727 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2311529 00:22:41.986 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:41.986 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.986 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:41.986 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.986 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:41.986 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.986 17:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.986 rmmod nvme_tcp 00:22:41.986 rmmod nvme_fabrics 00:22:41.986 rmmod nvme_keyring 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2311508 ']' 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2311508 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2311508 ']' 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2311508 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2311508 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2311508' 00:22:41.986 killing process with pid 2311508 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2311508 00:22:41.986 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2311508 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.244 17:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.781 17:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.781 00:22:44.781 real 0m17.710s 00:22:44.781 user 0m25.903s 00:22:44.781 sys 0m2.963s 00:22:44.781 17:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.781 17:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.781 ************************************ 00:22:44.781 END TEST nvmf_discovery_remove_ifc 00:22:44.781 ************************************ 00:22:44.781 17:45:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:44.781 17:45:39 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:44.781 17:45:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:44.781 17:45:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.781 17:45:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.781 ************************************ 00:22:44.781 START TEST nvmf_identify_kernel_target 00:22:44.781 ************************************ 
00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:44.781 * Looking for test storage... 00:22:44.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.781 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:44.782 17:45:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.782 17:45:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:46.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:46.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:46.687 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:46.687 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.687 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:22:46.688 00:22:46.688 --- 10.0.0.2 ping statistics --- 00:22:46.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.688 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:46.688 00:22:46.688 --- 10.0.0.1 ping statistics --- 00:22:46.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.688 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.688 17:45:41 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:46.688 17:45:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:47.624 Waiting for block devices as requested 00:22:47.624 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:47.884 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:47.884 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:48.142 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:48.142 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:48.142 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:48.142 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:48.399 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:48.399 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:48.399 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:48.399 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:48.399 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:48.657 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:48.657 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:48.657 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:48.657 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:48.915 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:48.915 No valid GPT data, bailing 00:22:48.915 17:45:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:48.915 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:49.173 00:22:49.173 Discovery Log Number of Records 2, Generation counter 2 00:22:49.173 =====Discovery Log Entry 0====== 00:22:49.173 trtype: tcp 00:22:49.174 adrfam: ipv4 00:22:49.174 subtype: current discovery subsystem 00:22:49.174 treq: not specified, sq flow control disable supported 00:22:49.174 portid: 1 00:22:49.174 trsvcid: 4420 00:22:49.174 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:49.174 traddr: 10.0.0.1 00:22:49.174 eflags: none 00:22:49.174 sectype: none 00:22:49.174 =====Discovery Log Entry 1====== 00:22:49.174 trtype: tcp 00:22:49.174 adrfam: ipv4 00:22:49.174 subtype: nvme subsystem 00:22:49.174 treq: not specified, sq flow control disable supported 00:22:49.174 portid: 1 00:22:49.174 trsvcid: 4420 00:22:49.174 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:49.174 traddr: 10.0.0.1 00:22:49.174 eflags: none 00:22:49.174 sectype: none 00:22:49.174 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:49.174 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:49.174 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.174 ===================================================== 00:22:49.174 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:49.174 ===================================================== 00:22:49.174 Controller Capabilities/Features 00:22:49.174 ================================ 00:22:49.174 Vendor ID: 0000 00:22:49.174 Subsystem Vendor ID: 0000 00:22:49.174 Serial Number: 496bfee951425bd3a4a6 00:22:49.174 Model Number: Linux 00:22:49.174 Firmware Version: 6.7.0-68 00:22:49.174 Recommended Arb Burst: 0 00:22:49.174 IEEE OUI Identifier: 00 00 00 00:22:49.174 Multi-path I/O 00:22:49.174 May have multiple subsystem ports: No 00:22:49.174 May have multiple 
controllers: No 00:22:49.174 Associated with SR-IOV VF: No 00:22:49.174 Max Data Transfer Size: Unlimited 00:22:49.174 Max Number of Namespaces: 0 00:22:49.174 Max Number of I/O Queues: 1024 00:22:49.174 NVMe Specification Version (VS): 1.3 00:22:49.174 NVMe Specification Version (Identify): 1.3 00:22:49.174 Maximum Queue Entries: 1024 00:22:49.174 Contiguous Queues Required: No 00:22:49.174 Arbitration Mechanisms Supported 00:22:49.174 Weighted Round Robin: Not Supported 00:22:49.174 Vendor Specific: Not Supported 00:22:49.174 Reset Timeout: 7500 ms 00:22:49.174 Doorbell Stride: 4 bytes 00:22:49.174 NVM Subsystem Reset: Not Supported 00:22:49.174 Command Sets Supported 00:22:49.174 NVM Command Set: Supported 00:22:49.174 Boot Partition: Not Supported 00:22:49.174 Memory Page Size Minimum: 4096 bytes 00:22:49.174 Memory Page Size Maximum: 4096 bytes 00:22:49.174 Persistent Memory Region: Not Supported 00:22:49.174 Optional Asynchronous Events Supported 00:22:49.174 Namespace Attribute Notices: Not Supported 00:22:49.174 Firmware Activation Notices: Not Supported 00:22:49.174 ANA Change Notices: Not Supported 00:22:49.174 PLE Aggregate Log Change Notices: Not Supported 00:22:49.174 LBA Status Info Alert Notices: Not Supported 00:22:49.174 EGE Aggregate Log Change Notices: Not Supported 00:22:49.174 Normal NVM Subsystem Shutdown event: Not Supported 00:22:49.174 Zone Descriptor Change Notices: Not Supported 00:22:49.174 Discovery Log Change Notices: Supported 00:22:49.174 Controller Attributes 00:22:49.174 128-bit Host Identifier: Not Supported 00:22:49.174 Non-Operational Permissive Mode: Not Supported 00:22:49.174 NVM Sets: Not Supported 00:22:49.174 Read Recovery Levels: Not Supported 00:22:49.174 Endurance Groups: Not Supported 00:22:49.174 Predictable Latency Mode: Not Supported 00:22:49.174 Traffic Based Keep ALive: Not Supported 00:22:49.174 Namespace Granularity: Not Supported 00:22:49.174 SQ Associations: Not Supported 00:22:49.174 UUID List: Not Supported 00:22:49.174 Multi-Domain Subsystem: Not Supported 00:22:49.174 Fixed Capacity Management: Not Supported 00:22:49.174 Variable Capacity Management: Not Supported 00:22:49.174 Delete Endurance Group: Not Supported 00:22:49.174 Delete NVM Set: Not Supported 00:22:49.174 Extended LBA Formats Supported: Not Supported 00:22:49.174 Flexible Data Placement Supported: Not Supported 00:22:49.174 00:22:49.174 Controller Memory Buffer Support 00:22:49.174 ================================ 00:22:49.174 Supported: No 00:22:49.174 00:22:49.174 Persistent Memory Region Support 00:22:49.174 ================================ 00:22:49.174 Supported: No 00:22:49.174 00:22:49.174 Admin Command Set Attributes 00:22:49.174 ============================ 00:22:49.174 Security Send/Receive: Not Supported 00:22:49.174 Format NVM: Not Supported 00:22:49.174 Firmware Activate/Download: Not Supported 00:22:49.174 Namespace Management: Not Supported 00:22:49.174 Device Self-Test: Not Supported 00:22:49.174 Directives: Not Supported 00:22:49.174 NVMe-MI: Not Supported 00:22:49.174 Virtualization Management: Not Supported 00:22:49.174 Doorbell Buffer Config: Not Supported 00:22:49.174 Get LBA Status Capability: Not Supported 00:22:49.174 Command & Feature Lockdown Capability: Not Supported 00:22:49.174 Abort Command Limit: 1 00:22:49.174 Async Event Request Limit: 1 00:22:49.174 Number of Firmware Slots: N/A 00:22:49.174 Firmware Slot 1 Read-Only: N/A 00:22:49.174 Firmware Activation Without Reset: N/A 00:22:49.174 Multiple Update Detection Support: N/A 
00:22:49.174 Firmware Update Granularity: No Information Provided 00:22:49.174 Per-Namespace SMART Log: No 00:22:49.174 Asymmetric Namespace Access Log Page: Not Supported 00:22:49.174 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:49.174 Command Effects Log Page: Not Supported 00:22:49.174 Get Log Page Extended Data: Supported 00:22:49.174 Telemetry Log Pages: Not Supported 00:22:49.174 Persistent Event Log Pages: Not Supported 00:22:49.174 Supported Log Pages Log Page: May Support 00:22:49.174 Commands Supported & Effects Log Page: Not Supported 00:22:49.174 Feature Identifiers & Effects Log Page:May Support 00:22:49.174 NVMe-MI Commands & Effects Log Page: May Support 00:22:49.174 Data Area 4 for Telemetry Log: Not Supported 00:22:49.174 Error Log Page Entries Supported: 1 00:22:49.174 Keep Alive: Not Supported 00:22:49.174 00:22:49.174 NVM Command Set Attributes 00:22:49.174 ========================== 00:22:49.174 Submission Queue Entry Size 00:22:49.174 Max: 1 00:22:49.174 Min: 1 00:22:49.174 Completion Queue Entry Size 00:22:49.174 Max: 1 00:22:49.174 Min: 1 00:22:49.174 Number of Namespaces: 0 00:22:49.174 Compare Command: Not Supported 00:22:49.174 Write Uncorrectable Command: Not Supported 00:22:49.174 Dataset Management Command: Not Supported 00:22:49.174 Write Zeroes Command: Not Supported 00:22:49.174 Set Features Save Field: Not Supported 00:22:49.174 Reservations: Not Supported 00:22:49.174 Timestamp: Not Supported 00:22:49.174 Copy: Not Supported 00:22:49.174 Volatile Write Cache: Not Present 00:22:49.174 Atomic Write Unit (Normal): 1 00:22:49.174 Atomic Write Unit (PFail): 1 00:22:49.174 Atomic Compare & Write Unit: 1 00:22:49.174 Fused Compare & Write: Not Supported 00:22:49.174 Scatter-Gather List 00:22:49.174 SGL Command Set: Supported 00:22:49.174 SGL Keyed: Not Supported 00:22:49.174 SGL Bit Bucket Descriptor: Not Supported 00:22:49.174 SGL Metadata Pointer: Not Supported 00:22:49.174 Oversized SGL: Not Supported 00:22:49.174 SGL Metadata Address: Not Supported 00:22:49.174 SGL Offset: Supported 00:22:49.174 Transport SGL Data Block: Not Supported 00:22:49.174 Replay Protected Memory Block: Not Supported 00:22:49.174 00:22:49.174 Firmware Slot Information 00:22:49.174 ========================= 00:22:49.174 Active slot: 0 00:22:49.174 00:22:49.174 00:22:49.174 Error Log 00:22:49.174 ========= 00:22:49.174 00:22:49.174 Active Namespaces 00:22:49.174 ================= 00:22:49.174 Discovery Log Page 00:22:49.174 ================== 00:22:49.174 Generation Counter: 2 00:22:49.174 Number of Records: 2 00:22:49.174 Record Format: 0 00:22:49.174 00:22:49.174 Discovery Log Entry 0 00:22:49.174 ---------------------- 00:22:49.174 Transport Type: 3 (TCP) 00:22:49.174 Address Family: 1 (IPv4) 00:22:49.174 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:49.174 Entry Flags: 00:22:49.174 Duplicate Returned Information: 0 00:22:49.174 Explicit Persistent Connection Support for Discovery: 0 00:22:49.174 Transport Requirements: 00:22:49.174 Secure Channel: Not Specified 00:22:49.174 Port ID: 1 (0x0001) 00:22:49.174 Controller ID: 65535 (0xffff) 00:22:49.174 Admin Max SQ Size: 32 00:22:49.174 Transport Service Identifier: 4420 00:22:49.174 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:49.174 Transport Address: 10.0.0.1 00:22:49.174 Discovery Log Entry 1 00:22:49.174 ---------------------- 00:22:49.174 Transport Type: 3 (TCP) 00:22:49.174 Address Family: 1 (IPv4) 00:22:49.174 Subsystem Type: 2 (NVM Subsystem) 00:22:49.174 Entry Flags: 
00:22:49.174 Duplicate Returned Information: 0 00:22:49.174 Explicit Persistent Connection Support for Discovery: 0 00:22:49.174 Transport Requirements: 00:22:49.174 Secure Channel: Not Specified 00:22:49.174 Port ID: 1 (0x0001) 00:22:49.174 Controller ID: 65535 (0xffff) 00:22:49.174 Admin Max SQ Size: 32 00:22:49.174 Transport Service Identifier: 4420 00:22:49.174 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:49.174 Transport Address: 10.0.0.1 00:22:49.175 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:49.175 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.433 get_feature(0x01) failed 00:22:49.433 get_feature(0x02) failed 00:22:49.433 get_feature(0x04) failed 00:22:49.433 ===================================================== 00:22:49.433 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:49.433 ===================================================== 00:22:49.433 Controller Capabilities/Features 00:22:49.433 ================================ 00:22:49.433 Vendor ID: 0000 00:22:49.433 Subsystem Vendor ID: 0000 00:22:49.433 Serial Number: 6594cfe3947f2d0de13e 00:22:49.433 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:49.433 Firmware Version: 6.7.0-68 00:22:49.433 Recommended Arb Burst: 6 00:22:49.433 IEEE OUI Identifier: 00 00 00 00:22:49.433 Multi-path I/O 00:22:49.433 May have multiple subsystem ports: Yes 00:22:49.433 May have multiple controllers: Yes 00:22:49.433 Associated with SR-IOV VF: No 00:22:49.433 Max Data Transfer Size: Unlimited 00:22:49.433 Max Number of Namespaces: 1024 00:22:49.433 Max Number of I/O Queues: 128 00:22:49.433 NVMe Specification Version (VS): 1.3 00:22:49.433 NVMe Specification Version (Identify): 1.3 00:22:49.433 Maximum Queue Entries: 1024 00:22:49.433 Contiguous Queues Required: No 00:22:49.433 Arbitration Mechanisms Supported 00:22:49.433 Weighted Round Robin: Not Supported 00:22:49.433 Vendor Specific: Not Supported 00:22:49.433 Reset Timeout: 7500 ms 00:22:49.433 Doorbell Stride: 4 bytes 00:22:49.433 NVM Subsystem Reset: Not Supported 00:22:49.433 Command Sets Supported 00:22:49.433 NVM Command Set: Supported 00:22:49.433 Boot Partition: Not Supported 00:22:49.433 Memory Page Size Minimum: 4096 bytes 00:22:49.433 Memory Page Size Maximum: 4096 bytes 00:22:49.433 Persistent Memory Region: Not Supported 00:22:49.433 Optional Asynchronous Events Supported 00:22:49.433 Namespace Attribute Notices: Supported 00:22:49.433 Firmware Activation Notices: Not Supported 00:22:49.433 ANA Change Notices: Supported 00:22:49.433 PLE Aggregate Log Change Notices: Not Supported 00:22:49.433 LBA Status Info Alert Notices: Not Supported 00:22:49.433 EGE Aggregate Log Change Notices: Not Supported 00:22:49.433 Normal NVM Subsystem Shutdown event: Not Supported 00:22:49.433 Zone Descriptor Change Notices: Not Supported 00:22:49.433 Discovery Log Change Notices: Not Supported 00:22:49.433 Controller Attributes 00:22:49.433 128-bit Host Identifier: Supported 00:22:49.433 Non-Operational Permissive Mode: Not Supported 00:22:49.433 NVM Sets: Not Supported 00:22:49.433 Read Recovery Levels: Not Supported 00:22:49.433 Endurance Groups: Not Supported 00:22:49.433 Predictable Latency Mode: Not Supported 00:22:49.433 Traffic Based Keep ALive: Supported 00:22:49.433 Namespace Granularity: Not Supported 
00:22:49.433 SQ Associations: Not Supported 00:22:49.433 UUID List: Not Supported 00:22:49.433 Multi-Domain Subsystem: Not Supported 00:22:49.433 Fixed Capacity Management: Not Supported 00:22:49.433 Variable Capacity Management: Not Supported 00:22:49.433 Delete Endurance Group: Not Supported 00:22:49.433 Delete NVM Set: Not Supported 00:22:49.433 Extended LBA Formats Supported: Not Supported 00:22:49.433 Flexible Data Placement Supported: Not Supported 00:22:49.433 00:22:49.433 Controller Memory Buffer Support 00:22:49.433 ================================ 00:22:49.433 Supported: No 00:22:49.433 00:22:49.433 Persistent Memory Region Support 00:22:49.433 ================================ 00:22:49.433 Supported: No 00:22:49.433 00:22:49.433 Admin Command Set Attributes 00:22:49.433 ============================ 00:22:49.433 Security Send/Receive: Not Supported 00:22:49.433 Format NVM: Not Supported 00:22:49.433 Firmware Activate/Download: Not Supported 00:22:49.433 Namespace Management: Not Supported 00:22:49.433 Device Self-Test: Not Supported 00:22:49.433 Directives: Not Supported 00:22:49.433 NVMe-MI: Not Supported 00:22:49.433 Virtualization Management: Not Supported 00:22:49.433 Doorbell Buffer Config: Not Supported 00:22:49.433 Get LBA Status Capability: Not Supported 00:22:49.433 Command & Feature Lockdown Capability: Not Supported 00:22:49.433 Abort Command Limit: 4 00:22:49.433 Async Event Request Limit: 4 00:22:49.433 Number of Firmware Slots: N/A 00:22:49.433 Firmware Slot 1 Read-Only: N/A 00:22:49.433 Firmware Activation Without Reset: N/A 00:22:49.434 Multiple Update Detection Support: N/A 00:22:49.434 Firmware Update Granularity: No Information Provided 00:22:49.434 Per-Namespace SMART Log: Yes 00:22:49.434 Asymmetric Namespace Access Log Page: Supported 00:22:49.434 ANA Transition Time : 10 sec 00:22:49.434 00:22:49.434 Asymmetric Namespace Access Capabilities 00:22:49.434 ANA Optimized State : Supported 00:22:49.434 ANA Non-Optimized State : Supported 00:22:49.434 ANA Inaccessible State : Supported 00:22:49.434 ANA Persistent Loss State : Supported 00:22:49.434 ANA Change State : Supported 00:22:49.434 ANAGRPID is not changed : No 00:22:49.434 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:49.434 00:22:49.434 ANA Group Identifier Maximum : 128 00:22:49.434 Number of ANA Group Identifiers : 128 00:22:49.434 Max Number of Allowed Namespaces : 1024 00:22:49.434 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:49.434 Command Effects Log Page: Supported 00:22:49.434 Get Log Page Extended Data: Supported 00:22:49.434 Telemetry Log Pages: Not Supported 00:22:49.434 Persistent Event Log Pages: Not Supported 00:22:49.434 Supported Log Pages Log Page: May Support 00:22:49.434 Commands Supported & Effects Log Page: Not Supported 00:22:49.434 Feature Identifiers & Effects Log Page:May Support 00:22:49.434 NVMe-MI Commands & Effects Log Page: May Support 00:22:49.434 Data Area 4 for Telemetry Log: Not Supported 00:22:49.434 Error Log Page Entries Supported: 128 00:22:49.434 Keep Alive: Supported 00:22:49.434 Keep Alive Granularity: 1000 ms 00:22:49.434 00:22:49.434 NVM Command Set Attributes 00:22:49.434 ========================== 00:22:49.434 Submission Queue Entry Size 00:22:49.434 Max: 64 00:22:49.434 Min: 64 00:22:49.434 Completion Queue Entry Size 00:22:49.434 Max: 16 00:22:49.434 Min: 16 00:22:49.434 Number of Namespaces: 1024 00:22:49.434 Compare Command: Not Supported 00:22:49.434 Write Uncorrectable Command: Not Supported 00:22:49.434 Dataset Management Command: Supported 
00:22:49.434 Write Zeroes Command: Supported 00:22:49.434 Set Features Save Field: Not Supported 00:22:49.434 Reservations: Not Supported 00:22:49.434 Timestamp: Not Supported 00:22:49.434 Copy: Not Supported 00:22:49.434 Volatile Write Cache: Present 00:22:49.434 Atomic Write Unit (Normal): 1 00:22:49.434 Atomic Write Unit (PFail): 1 00:22:49.434 Atomic Compare & Write Unit: 1 00:22:49.434 Fused Compare & Write: Not Supported 00:22:49.434 Scatter-Gather List 00:22:49.434 SGL Command Set: Supported 00:22:49.434 SGL Keyed: Not Supported 00:22:49.434 SGL Bit Bucket Descriptor: Not Supported 00:22:49.434 SGL Metadata Pointer: Not Supported 00:22:49.434 Oversized SGL: Not Supported 00:22:49.434 SGL Metadata Address: Not Supported 00:22:49.434 SGL Offset: Supported 00:22:49.434 Transport SGL Data Block: Not Supported 00:22:49.434 Replay Protected Memory Block: Not Supported 00:22:49.434 00:22:49.434 Firmware Slot Information 00:22:49.434 ========================= 00:22:49.434 Active slot: 0 00:22:49.434 00:22:49.434 Asymmetric Namespace Access 00:22:49.434 =========================== 00:22:49.434 Change Count : 0 00:22:49.434 Number of ANA Group Descriptors : 1 00:22:49.434 ANA Group Descriptor : 0 00:22:49.434 ANA Group ID : 1 00:22:49.434 Number of NSID Values : 1 00:22:49.434 Change Count : 0 00:22:49.434 ANA State : 1 00:22:49.434 Namespace Identifier : 1 00:22:49.434 00:22:49.434 Commands Supported and Effects 00:22:49.434 ============================== 00:22:49.434 Admin Commands 00:22:49.434 -------------- 00:22:49.434 Get Log Page (02h): Supported 00:22:49.434 Identify (06h): Supported 00:22:49.434 Abort (08h): Supported 00:22:49.434 Set Features (09h): Supported 00:22:49.434 Get Features (0Ah): Supported 00:22:49.434 Asynchronous Event Request (0Ch): Supported 00:22:49.434 Keep Alive (18h): Supported 00:22:49.434 I/O Commands 00:22:49.434 ------------ 00:22:49.434 Flush (00h): Supported 00:22:49.434 Write (01h): Supported LBA-Change 00:22:49.434 Read (02h): Supported 00:22:49.434 Write Zeroes (08h): Supported LBA-Change 00:22:49.434 Dataset Management (09h): Supported 00:22:49.434 00:22:49.434 Error Log 00:22:49.434 ========= 00:22:49.434 Entry: 0 00:22:49.434 Error Count: 0x3 00:22:49.434 Submission Queue Id: 0x0 00:22:49.434 Command Id: 0x5 00:22:49.434 Phase Bit: 0 00:22:49.434 Status Code: 0x2 00:22:49.434 Status Code Type: 0x0 00:22:49.434 Do Not Retry: 1 00:22:49.434 Error Location: 0x28 00:22:49.434 LBA: 0x0 00:22:49.434 Namespace: 0x0 00:22:49.434 Vendor Log Page: 0x0 00:22:49.434 ----------- 00:22:49.434 Entry: 1 00:22:49.434 Error Count: 0x2 00:22:49.434 Submission Queue Id: 0x0 00:22:49.434 Command Id: 0x5 00:22:49.434 Phase Bit: 0 00:22:49.434 Status Code: 0x2 00:22:49.434 Status Code Type: 0x0 00:22:49.434 Do Not Retry: 1 00:22:49.434 Error Location: 0x28 00:22:49.434 LBA: 0x0 00:22:49.434 Namespace: 0x0 00:22:49.434 Vendor Log Page: 0x0 00:22:49.434 ----------- 00:22:49.434 Entry: 2 00:22:49.434 Error Count: 0x1 00:22:49.434 Submission Queue Id: 0x0 00:22:49.434 Command Id: 0x4 00:22:49.434 Phase Bit: 0 00:22:49.434 Status Code: 0x2 00:22:49.434 Status Code Type: 0x0 00:22:49.434 Do Not Retry: 1 00:22:49.434 Error Location: 0x28 00:22:49.434 LBA: 0x0 00:22:49.434 Namespace: 0x0 00:22:49.434 Vendor Log Page: 0x0 00:22:49.434 00:22:49.434 Number of Queues 00:22:49.434 ================ 00:22:49.434 Number of I/O Submission Queues: 128 00:22:49.434 Number of I/O Completion Queues: 128 00:22:49.434 00:22:49.434 ZNS Specific Controller Data 00:22:49.434 
============================ 00:22:49.434 Zone Append Size Limit: 0 00:22:49.434 00:22:49.434 00:22:49.434 Active Namespaces 00:22:49.434 ================= 00:22:49.434 get_feature(0x05) failed 00:22:49.434 Namespace ID:1 00:22:49.434 Command Set Identifier: NVM (00h) 00:22:49.434 Deallocate: Supported 00:22:49.434 Deallocated/Unwritten Error: Not Supported 00:22:49.434 Deallocated Read Value: Unknown 00:22:49.434 Deallocate in Write Zeroes: Not Supported 00:22:49.434 Deallocated Guard Field: 0xFFFF 00:22:49.434 Flush: Supported 00:22:49.434 Reservation: Not Supported 00:22:49.434 Namespace Sharing Capabilities: Multiple Controllers 00:22:49.434 Size (in LBAs): 1953525168 (931GiB) 00:22:49.434 Capacity (in LBAs): 1953525168 (931GiB) 00:22:49.434 Utilization (in LBAs): 1953525168 (931GiB) 00:22:49.434 UUID: efb26c63-a239-453f-b258-a3bed793dad5 00:22:49.434 Thin Provisioning: Not Supported 00:22:49.434 Per-NS Atomic Units: Yes 00:22:49.434 Atomic Boundary Size (Normal): 0 00:22:49.434 Atomic Boundary Size (PFail): 0 00:22:49.434 Atomic Boundary Offset: 0 00:22:49.434 NGUID/EUI64 Never Reused: No 00:22:49.434 ANA group ID: 1 00:22:49.434 Namespace Write Protected: No 00:22:49.434 Number of LBA Formats: 1 00:22:49.434 Current LBA Format: LBA Format #00 00:22:49.434 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:49.434 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.434 rmmod nvme_tcp 00:22:49.434 rmmod nvme_fabrics 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.434 17:45:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:51.341 
17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:51.341 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:51.601 17:45:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:52.534 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:52.534 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:52.534 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:52.534 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:52.534 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:52.534 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:52.534 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:52.534 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:52.534 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:52.534 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:52.535 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:52.535 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:52.535 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:52.794 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:52.794 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:52.794 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:53.787 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:53.787 00:22:53.787 real 0m9.249s 00:22:53.787 user 0m1.994s 00:22:53.787 sys 0m3.257s 00:22:53.787 17:45:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.787 17:45:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.787 ************************************ 00:22:53.787 END TEST nvmf_identify_kernel_target 00:22:53.787 ************************************ 00:22:53.787 17:45:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.787 17:45:48 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:53.787 17:45:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.787 17:45:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.787 17:45:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.787 ************************************ 00:22:53.787 START TEST nvmf_auth_host 00:22:53.787 ************************************ 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:53.787 * Looking for test storage... 00:22:53.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.787 17:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.693 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.694 
17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:55.694 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:55.694 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:55.694 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:55.694 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:22:55.694 00:22:55.694 --- 10.0.0.2 ping statistics --- 00:22:55.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.694 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:22:55.694 00:22:55.694 --- 10.0.0.1 ping statistics --- 00:22:55.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.694 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2318614 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2318614 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2318614 ']' 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
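The nvmf_tcp_init sequence traced above moves one port of the e810 pair (cvl_0_0) into a private network namespace and leaves its peer (cvl_0_1) in the default namespace, so the two ends of the NVMe/TCP connection sit in separate namespaces and traffic crosses a real link. A minimal stand-alone sketch of that wiring, reusing the interface names and 10.0.0.0/24 addressing from this run:

  ip netns add cvl_0_0_ns_spdk                        # private namespace for the SPDK side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port of the pair into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # peer port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # default ns -> namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> default ns sanity check
  # the SPDK application is then launched inside the namespace, as in the trace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth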
00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.694 17:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=34a1c45b5914501fe296392d06aaeb6d 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sHw 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 34a1c45b5914501fe296392d06aaeb6d 0 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 34a1c45b5914501fe296392d06aaeb6d 0 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=34a1c45b5914501fe296392d06aaeb6d 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sHw 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sHw 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.sHw 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:56.264 
17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bd44dc75ed56715f705356978e21bda63e1b84e90335edc47d870b53fe07dbfa 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fjR 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bd44dc75ed56715f705356978e21bda63e1b84e90335edc47d870b53fe07dbfa 3 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bd44dc75ed56715f705356978e21bda63e1b84e90335edc47d870b53fe07dbfa 3 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bd44dc75ed56715f705356978e21bda63e1b84e90335edc47d870b53fe07dbfa 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fjR 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fjR 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.fjR 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=95a99dcc1c2b60ff51e4e25b57d27c38574445bd2e6dfc0b 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.v8p 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 95a99dcc1c2b60ff51e4e25b57d27c38574445bd2e6dfc0b 0 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 95a99dcc1c2b60ff51e4e25b57d27c38574445bd2e6dfc0b 0 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=95a99dcc1c2b60ff51e4e25b57d27c38574445bd2e6dfc0b 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.v8p 00:22:56.264 17:45:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.v8p 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.v8p 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed3c1d6f506aa45053b421ed481ee63ccf1ecfe16d2707c6 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fvA 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed3c1d6f506aa45053b421ed481ee63ccf1ecfe16d2707c6 2 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed3c1d6f506aa45053b421ed481ee63ccf1ecfe16d2707c6 2 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed3c1d6f506aa45053b421ed481ee63ccf1ecfe16d2707c6 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:56.264 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fvA 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fvA 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fvA 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=861a712fe60c3e1e9c08fe17968b3d34 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VgI 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 861a712fe60c3e1e9c08fe17968b3d34 1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 861a712fe60c3e1e9c08fe17968b3d34 1 
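The gen_dhchap_key calls above draw len/2 random bytes with xxd, then feed the resulting hex string to a small inline "python -" helper that writes the finished DHHC-1 secret to a /tmp/spdk.key-* file. Judging by the secrets that appear later in this log (for example 95a99dcc...fc0b becomes DHHC-1:00:OTVhOTlk...hmrMdw==:), the helper base64-encodes the hex string plus a 4-byte checksum; a rough equivalent, assuming that checksum is a little-endian CRC-32 of the key string (the digest id is 0 for a plain key and 1/2/3 for sha256/sha384/sha512):

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as for "gen_dhchap_key null 48"
  # wrap the key in the DHHC-1 secret representation (CRC-32 suffix is an assumption here)
  secret=$(python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:" + sys.argv[2] + ":" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key" 00)
  file=$(mktemp -t spdk.key-null.XXX)
  printf '%s\n' "$secret" > "$file" && chmod 0600 "$file"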
00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=861a712fe60c3e1e9c08fe17968b3d34 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VgI 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VgI 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.VgI 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c67b270624ed377514fdd041e303553b 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Faa 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c67b270624ed377514fdd041e303553b 1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c67b270624ed377514fdd041e303553b 1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c67b270624ed377514fdd041e303553b 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Faa 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Faa 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Faa 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=0fc03efc25a0e56034f2a67d5dcbd36ec240019e19d9ccb3 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qbk 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0fc03efc25a0e56034f2a67d5dcbd36ec240019e19d9ccb3 2 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0fc03efc25a0e56034f2a67d5dcbd36ec240019e19d9ccb3 2 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0fc03efc25a0e56034f2a67d5dcbd36ec240019e19d9ccb3 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qbk 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qbk 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Qbk 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:56.523 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c999b8067b6dc02120bd1f882bd42da1 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.I4m 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c999b8067b6dc02120bd1f882bd42da1 0 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c999b8067b6dc02120bd1f882bd42da1 0 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c999b8067b6dc02120bd1f882bd42da1 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.I4m 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.I4m 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.I4m 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=231e7b6afb0af99520ee73dce624dd097919c815456bd8c37b1d3c373aeb62c9 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1ns 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 231e7b6afb0af99520ee73dce624dd097919c815456bd8c37b1d3c373aeb62c9 3 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 231e7b6afb0af99520ee73dce624dd097919c815456bd8c37b1d3c373aeb62c9 3 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=231e7b6afb0af99520ee73dce624dd097919c815456bd8c37b1d3c373aeb62c9 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:56.524 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1ns 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1ns 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1ns 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2318614 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2318614 ']' 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:56.781 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.782 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sHw 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fjR ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fjR 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.v8p 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fvA ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fvA 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VgI 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Faa ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Faa 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
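Each generated file is registered twice with the running SPDK application: keyN is the host secret for key index N, and ckeyN is the matching controller secret that enables bidirectional authentication. rpc_cmd in this trace appears to wrap scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, so the equivalent direct invocations (file names from this run) would be roughly:

  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.sHw
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fjR
  ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.v8p
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fvA
  # key2/ckey2, key3/ckey3 and key4 follow the same pattern (key4 has no controller key in this run)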
00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Qbk 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.I4m ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.I4m 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1ns 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
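nvmet_auth_init / configure_kernel_target below builds the counterpart kernel target: after scripts/setup.sh reset hands the local NVMe drive back to the kernel, a nvmet subsystem nqn.2024-02.io.spdk:cnode0 is backed by /dev/nvme0n1 and exposed on 10.0.0.1:4420 over TCP. xtrace does not show which configfs files the echoes are redirected into; against the standard nvmet attribute names, the traced sequence amounts to roughly:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir $subsys
  mkdir $subsys/namespaces/1
  mkdir $nvmet/ports/1
  # (the trace also writes the string SPDK-nqn.2024-02.io.spdk:cnode0 into the subsystem's
  #  model/serial attribute and toggles attr_allow_any_host; exact targets are not captured)
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1            > $subsys/namespaces/1/enable
  echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
  echo tcp          > $nvmet/ports/1/addr_trtype
  echo 4420         > $nvmet/ports/1/addr_trsvcid
  echo ipv4         > $nvmet/ports/1/addr_adrfam
  ln -s $subsys $nvmet/ports/1/subsystems/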
00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:57.041 17:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:57.974 Waiting for block devices as requested 00:22:57.974 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:58.232 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:58.232 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:58.232 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:58.491 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:58.491 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:58.491 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:58.491 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:58.749 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:58.749 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:58.749 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:58.749 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:59.006 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:59.006 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:59.006 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:59.263 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:59.263 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:59.830 No valid GPT data, bailing 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:59.830 00:22:59.830 Discovery Log Number of Records 2, Generation counter 2 00:22:59.830 =====Discovery Log Entry 0====== 00:22:59.830 trtype: tcp 00:22:59.830 adrfam: ipv4 00:22:59.830 subtype: current discovery subsystem 00:22:59.830 treq: not specified, sq flow control disable supported 00:22:59.830 portid: 1 00:22:59.830 trsvcid: 4420 00:22:59.830 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:59.830 traddr: 10.0.0.1 00:22:59.830 eflags: none 00:22:59.830 sectype: none 00:22:59.830 =====Discovery Log Entry 1====== 00:22:59.830 trtype: tcp 00:22:59.830 adrfam: ipv4 00:22:59.830 subtype: nvme subsystem 00:22:59.830 treq: not specified, sq flow control disable supported 00:22:59.830 portid: 1 00:22:59.830 trsvcid: 4420 00:22:59.830 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:59.830 traddr: 10.0.0.1 00:22:59.830 eflags: none 00:22:59.830 sectype: none 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 
]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.830 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.088 nvme0n1 00:23:00.088 17:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.088 17:45:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.088 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.089 
17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.089 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.346 nvme0n1 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.346 17:45:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.346 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 nvme0n1 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:23:00.347 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.604 nvme0n1 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.604 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:00.862 17:45:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.862 nvme0n1 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.862 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.863 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:00.863 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.863 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.863 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.863 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.863 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.120 17:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.120 nvme0n1 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:01.120 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.121 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.378 nvme0n1 00:23:01.378 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.378 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.378 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.378 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.378 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.378 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.378 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.379 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.636 nvme0n1 00:23:01.636 
17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:01.636 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.637 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.895 nvme0n1 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.895 17:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.157 nvme0n1 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.157 
17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.157 17:45:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.157 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.417 nvme0n1 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:02.417 17:45:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.417 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.676 nvme0n1 00:23:02.676 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.676 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.676 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.676 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.676 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.933 17:45:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.933 17:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.190 nvme0n1 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.190 17:45:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.190 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.449 nvme0n1 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.449 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 nvme0n1 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.013 17:45:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.013 17:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.271 nvme0n1 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:04.271 17:45:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.271 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.835 nvme0n1 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.835 
17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.835 17:45:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.835 17:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.401 nvme0n1 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.401 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.979 nvme0n1 00:23:05.980 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.980 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.980 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.980 17:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.980 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.980 17:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.980 
17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.980 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.547 nvme0n1 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.547 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.548 17:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.147 nvme0n1 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.147 17:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.086 nvme0n1 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.086 17:46:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.086 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.087 17:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.087 17:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.087 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.087 17:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.463 nvme0n1 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.463 17:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.403 nvme0n1 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.403 
17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.403 17:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.343 nvme0n1 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.343 
17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.343 17:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.279 nvme0n1 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.279 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.537 nvme0n1 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:12.537 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.538 nvme0n1 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.538 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.796 nvme0n1 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.796 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.056 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.056 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.056 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:13.056 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.056 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.056 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.056 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.057 17:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.057 nvme0n1 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.057 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.318 nvme0n1 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.318 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.579 nvme0n1 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.579 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.838 nvme0n1 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.838 17:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.099 nvme0n1 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.099 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.100 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.359 nvme0n1 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.359 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.360 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.619 nvme0n1 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.619 17:46:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.619 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.877 nvme0n1 00:23:14.877 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.877 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.877 17:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.877 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.877 17:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.877 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.135 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.394 nvme0n1 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.394 17:46:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.394 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.653 nvme0n1 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:15.653 17:46:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.653 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.912 17:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.171 nvme0n1 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:16.171 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.465 nvme0n1 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.465 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.466 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.035 nvme0n1 00:23:17.035 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.035 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.035 17:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.035 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.035 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.035 17:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.035 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.604 nvme0n1 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.604 17:46:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.604 17:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.172 nvme0n1 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.172 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.173 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.738 nvme0n1 00:23:18.738 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.738 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.738 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.738 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.738 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.738 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
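The ip_candidates / NVMF_INITIATOR_IP expansions running here (and before every attach in this log) come from the get_main_ns_ip helper in nvmf/common.sh, which resolves the address the initiator should dial for the transport under test. A minimal sketch of that helper, reconstructed from the xtrace output only (the TEST_TRANSPORT variable name and the indirect expansion are assumptions; the trace shows just the already-expanded values "tcp" and "10.0.0.1"):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Map each transport to the environment variable that holds its address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                  # "tcp" in this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                  # name of the variable to read

    [[ -z ${!ip} ]] && return 1                           # indirect expansion; 10.0.0.1 here
    echo "${!ip}"
}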
00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.998 17:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.566 nvme0n1 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
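Each nvme0n1 block in this part of the log is one pass of the same host-side cycle: pin the initiator to a single digest/dhgroup pair, attach with one of the pre-generated DH-HMAC-CHAP keys, confirm the controller actually came up, and detach again. A minimal sketch of one iteration is below; the RPC names, flags, NQNs, and the key0/ckey0-style key names are taken verbatim from the trace, while the standalone variable assignments are illustrative scaffolding matching the sha384/ffdhe8192/keyid 0 pass traced around this point:

digest=sha384
dhgroup=ffdhe8192
keyid=0
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Restrict the initiator to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key (and the controller key, when one is configured).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The controller only shows up if DH-HMAC-CHAP succeeded; verify, then tear down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0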
00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.566 17:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 nvme0n1 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.499 17:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.436 nvme0n1 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.436 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.696 17:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.635 nvme0n1 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.635 17:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.574 nvme0n1 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.574 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.575 17:46:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.575 17:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 nvme0n1 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 nvme0n1 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.514 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.772 17:46:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.772 nvme0n1 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.772 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.030 17:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.030 nvme0n1 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.030 17:46:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.030 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.289 17:46:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.289 nvme0n1 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.289 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.290 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.549 nvme0n1 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.549 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.807 nvme0n1 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.807 
17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.807 17:46:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.807 17:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.065 nvme0n1 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
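The nvmet_auth_set_key traces repeated throughout this section (host/auth.sh@42-51) all follow the same target-side pattern: pick a digest, an FFDHE group and a key index, then write the HMAC name, the DH group and the DHHC-1 secrets for the allowed host. A minimal sketch of such a helper is below; the configfs location and the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions based on the kernel nvmet in-band-authentication interface, not a copy of the script being traced.

nvmet_auth_set_key() {
    # Sketch only: the configfs paths and attribute names are assumed, not taken from host/auth.sh.
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}        # DHHC-1:xx:<base64>: secrets per key index
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$hostdir/dhchap_hash"        # e.g. 'hmac(sha512)'
    echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"     # e.g. ffdhe3072
    echo "$key"          > "$hostdir/dhchap_key"         # host secret for this key index
    if [[ -n $ckey ]]; then
        echo "$ckey" > "$hostdir/dhchap_ctrl_key"        # controller secret, only when bidirectional auth is tested
    fi
}
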
00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.065 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.324 nvme0n1 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.324 17:46:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:26.324 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
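On the host side, each iteration traced here reduces to a short RPC sequence against the SPDK application: constrain the allowed DH-HMAC-CHAP parameters, attach the controller with the matching secrets, verify the controller came up, then detach before the next combination. The flags below are taken verbatim from the surrounding log; invoking scripts/rpc.py directly instead of the test's rpc_cmd wrapper, and treating key3/ckey3 as names of keys already registered with the SPDK keyring, are assumptions made so the example stands alone.

# Restrict the initiator to the combination under test (sha512 + ffdhe3072 in this iteration).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Connect with DH-HMAC-CHAP; key3/ckey3 are assumed to be pre-registered keyring entries.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# A successful handshake leaves a controller named nvme0; detach it before the next combination.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py bdev_nvme_detach_controller nvme0
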
00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.325 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.584 nvme0n1 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.584 
17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.584 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.843 nvme0n1 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.843 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.844 17:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.103 nvme0n1 00:23:27.103 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.103 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.103 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.103 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.103 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.362 17:46:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.362 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.363 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.621 nvme0n1 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:27.621 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.622 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.882 nvme0n1 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.882 17:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.882 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.450 nvme0n1 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.450 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.709 nvme0n1 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.709 17:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.275 nvme0n1 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.275 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.276 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.844 nvme0n1 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.844 17:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.411 nvme0n1 00:23:30.411 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.412 17:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.978 nvme0n1 00:23:30.978 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.978 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.978 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.978 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.978 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.978 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.235 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.798 nvme0n1 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.798 17:46:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:31.798 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzRhMWM0NWI1OTE0NTAxZmUyOTYzOTJkMDZhYWViNmTJEfqq: 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: ]] 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmQ0NGRjNzVlZDU2NzE1ZjcwNTM1Njk3OGUyMWJkYTYzZTFiODRlOTAzMzVlZGM0N2Q4NzBiNTNmZTA3ZGJmYardRIk=: 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.799 17:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.734 nvme0n1 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.734 17:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.671 nvme0n1 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.671 17:46:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODYxYTcxMmZlNjBjM2UxZTljMDhmZTE3OTY4YjNkMzQYG471: 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzY3YjI3MDYyNGVkMzc3NTE0ZmRkMDQxZTMwMzU1M2Iw/dxV: 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.671 17:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.613 nvme0n1 00:23:34.613 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.613 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.613 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.613 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.613 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.613 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGZjMDNlZmMyNWEwZTU2MDM0ZjJhNjdkNWRjYmQzNmVjMjQwMDE5ZTE5ZDljY2IzS6+ASQ==: 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzk5OWI4MDY3YjZkYzAyMTIwYmQxZjg4MmJkNDJkYTF5hbQ4: 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:34.872 17:46:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.872 17:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.810 nvme0n1 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjMxZTdiNmFmYjBhZjk5NTIwZWU3M2RjZTYyNGRkMDk3OTE5YzgxNTQ1NmJkOGMzN2IxZDNjMzczYWViNjJjOSJQsKA=: 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:35.810 17:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.746 nvme0n1 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVhOTlkY2MxYzJiNjBmZjUxZTRlMjViNTdkMjdjMzg1NzQ0NDViZDJlNmRmYzBihmrMdw==: 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: ]] 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWQzYzFkNmY1MDZhYTQ1MDUzYjQyMWVkNDgxZWU2M2NjZjFlY2ZlMTZkMjcwN2M2NhVITA==: 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.746 
17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.746 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.747 request: 00:23:36.747 { 00:23:36.747 "name": "nvme0", 00:23:36.747 "trtype": "tcp", 00:23:36.747 "traddr": "10.0.0.1", 00:23:36.747 "adrfam": "ipv4", 00:23:36.747 "trsvcid": "4420", 00:23:36.747 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:36.747 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:36.747 "prchk_reftag": false, 00:23:36.747 "prchk_guard": false, 00:23:36.747 "hdgst": false, 00:23:36.747 "ddgst": false, 00:23:36.747 "method": "bdev_nvme_attach_controller", 00:23:36.747 "req_id": 1 00:23:36.747 } 00:23:36.747 Got JSON-RPC error response 00:23:36.747 response: 00:23:36.747 { 00:23:36.747 "code": -5, 00:23:36.747 "message": "Input/output error" 00:23:36.747 } 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.747 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.005 request: 00:23:37.005 { 00:23:37.005 "name": "nvme0", 00:23:37.005 "trtype": "tcp", 00:23:37.005 "traddr": "10.0.0.1", 00:23:37.005 "adrfam": "ipv4", 00:23:37.005 "trsvcid": "4420", 00:23:37.005 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:37.005 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:37.005 "prchk_reftag": false, 00:23:37.005 "prchk_guard": false, 00:23:37.005 "hdgst": false, 00:23:37.005 "ddgst": false, 00:23:37.005 "dhchap_key": "key2", 00:23:37.005 "method": "bdev_nvme_attach_controller", 00:23:37.005 "req_id": 1 00:23:37.005 } 00:23:37.005 Got JSON-RPC error response 00:23:37.005 response: 00:23:37.005 { 00:23:37.005 "code": -5, 00:23:37.005 "message": "Input/output error" 00:23:37.005 } 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:37.005 17:46:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.005 17:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.005 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.005 request: 00:23:37.005 { 00:23:37.005 "name": "nvme0", 00:23:37.005 "trtype": "tcp", 00:23:37.005 "traddr": "10.0.0.1", 00:23:37.005 "adrfam": "ipv4", 
00:23:37.005 "trsvcid": "4420", 00:23:37.005 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:37.005 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:37.005 "prchk_reftag": false, 00:23:37.005 "prchk_guard": false, 00:23:37.005 "hdgst": false, 00:23:37.005 "ddgst": false, 00:23:37.005 "dhchap_key": "key1", 00:23:37.005 "dhchap_ctrlr_key": "ckey2", 00:23:37.006 "method": "bdev_nvme_attach_controller", 00:23:37.006 "req_id": 1 00:23:37.006 } 00:23:37.006 Got JSON-RPC error response 00:23:37.006 response: 00:23:37.006 { 00:23:37.006 "code": -5, 00:23:37.006 "message": "Input/output error" 00:23:37.006 } 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.006 rmmod nvme_tcp 00:23:37.006 rmmod nvme_fabrics 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2318614 ']' 00:23:37.006 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2318614 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2318614 ']' 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2318614 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2318614 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2318614' 00:23:37.264 killing process with pid 2318614 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2318614 00:23:37.264 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2318614 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.524 17:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:39.430 17:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:40.807 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:40.807 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:40.807 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:40.807 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:40.807 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:40.807 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:40.807 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:40.807 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:40.807 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:41.744 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:41.744 17:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.sHw /tmp/spdk.key-null.v8p /tmp/spdk.key-sha256.VgI /tmp/spdk.key-sha384.Qbk /tmp/spdk.key-sha512.1ns 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:41.744 17:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:43.121 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:43.121 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:43.121 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:43.121 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:43.121 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:43.121 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:43.121 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:43.121 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:43.121 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:43.121 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:43.121 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:43.121 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:43.121 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:43.121 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:43.121 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:43.121 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:43.121 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:43.121 00:23:43.121 real 0m49.295s 00:23:43.121 user 0m47.351s 00:23:43.121 sys 0m5.576s 00:23:43.121 17:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.121 17:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.121 ************************************ 00:23:43.121 END TEST nvmf_auth_host 00:23:43.121 ************************************ 00:23:43.121 17:46:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:43.121 17:46:38 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:43.121 17:46:38 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:43.121 17:46:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:43.121 17:46:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.121 17:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:43.121 ************************************ 00:23:43.121 START TEST nvmf_digest 00:23:43.121 ************************************ 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:43.121 * Looking for test storage... 
00:23:43.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.121 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.122 17:46:38 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.122 17:46:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.046 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:45.047 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:45.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:45.047 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:45.047 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:23:45.047 00:23:45.047 --- 10.0.0.2 ping statistics --- 00:23:45.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.047 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:23:45.047 00:23:45.047 --- 10.0.0.1 ping statistics --- 00:23:45.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.047 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.047 17:46:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:45.307 ************************************ 00:23:45.307 START TEST nvmf_digest_clean 00:23:45.307 ************************************ 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2328168 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2328168 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2328168 ']' 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.307 
17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.307 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:45.307 [2024-07-15 17:46:40.257745] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:23:45.307 [2024-07-15 17:46:40.257834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.307 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.307 [2024-07-15 17:46:40.320844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.307 [2024-07-15 17:46:40.430489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.307 [2024-07-15 17:46:40.430548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.307 [2024-07-15 17:46:40.430576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.307 [2024-07-15 17:46:40.430587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.307 [2024-07-15 17:46:40.430597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:45.307 [2024-07-15 17:46:40.430622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:45.566 null0 00:23:45.566 [2024-07-15 17:46:40.609947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.566 [2024-07-15 17:46:40.634172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2328194 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2328194 /var/tmp/bperf.sock 00:23:45.566 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2328194 ']' 00:23:45.567 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:45.567 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.567 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:45.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:45.567 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.567 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:45.567 [2024-07-15 17:46:40.682151] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:23:45.567 [2024-07-15 17:46:40.682241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328194 ] 00:23:45.824 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.824 [2024-07-15 17:46:40.744197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.824 [2024-07-15 17:46:40.860322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.824 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.824 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:45.824 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:45.824 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:45.824 17:46:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:46.391 17:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:46.391 17:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:46.650 nvme0n1 00:23:46.650 17:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:46.650 17:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:46.909 Running I/O for 2 seconds... 
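This pass, like the three that follow it, reduces to the same three RPC calls against the bperf socket. The sketch below only restates the commands already logged above, with shell variables added here for brevity (a condensed sketch, not a new command sequence):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
# finish bdevperf start-up once the accel options are settled
$RPC -s $BPERF_SOCK framework_start_init
# attach the NVMe/TCP controller with data digest (--ddgst) enabled
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# run the configured 2-second workload; the latency table that follows is this call's output
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests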
00:23:48.830 00:23:48.830 Latency(us) 00:23:48.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.830 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:48.830 nvme0n1 : 2.01 18990.78 74.18 0.00 0.00 6730.85 3568.07 14466.47 00:23:48.830 =================================================================================================================== 00:23:48.830 Total : 18990.78 74.18 0.00 0.00 6730.85 3568.07 14466.47 00:23:48.830 0 00:23:48.830 17:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:48.830 17:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:48.830 17:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:48.830 17:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:48.830 17:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:48.830 | select(.opcode=="crc32c") 00:23:48.830 | "\(.module_name) \(.executed)"' 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2328194 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2328194 ']' 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2328194 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2328194 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2328194' 00:23:49.089 killing process with pid 2328194 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2328194 00:23:49.089 Received shutdown signal, test time was about 2.000000 seconds 00:23:49.089 00:23:49.089 Latency(us) 00:23:49.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.089 =================================================================================================================== 00:23:49.089 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.089 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2328194 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:49.348 17:46:44 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2328606 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2328606 /var/tmp/bperf.sock 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2328606 ']' 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:49.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.348 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:49.605 [2024-07-15 17:46:44.485114] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:23:49.605 [2024-07-15 17:46:44.485207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328606 ] 00:23:49.605 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:49.605 Zero copy mechanism will not be used. 
00:23:49.605 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.605 [2024-07-15 17:46:44.548488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.605 [2024-07-15 17:46:44.665634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.605 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.605 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:49.605 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:49.605 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:49.605 17:46:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:50.173 17:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:50.173 17:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:50.434 nvme0n1 00:23:50.434 17:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:50.434 17:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:50.434 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:50.434 Zero copy mechanism will not be used. 00:23:50.434 Running I/O for 2 seconds... 
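The MiB/s column in each result table follows directly from IOPS and the configured I/O size, so the two can be cross-checked by hand. A minimal sketch for the 131072-byte run reported just below (both figures taken from that table):

# MiB/s = IOPS * I/O size / 2^20: 2933.71 * 131072 / 1048576 ~= 366.71
awk 'BEGIN { printf "%.2f MiB/s\n", 2933.71 * 131072 / (1024 * 1024) }'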
00:23:53.003 00:23:53.003 Latency(us) 00:23:53.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.003 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:53.003 nvme0n1 : 2.00 2933.71 366.71 0.00 0.00 5449.45 4951.61 13398.47 00:23:53.003 =================================================================================================================== 00:23:53.003 Total : 2933.71 366.71 0.00 0.00 5449.45 4951.61 13398.47 00:23:53.003 0 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:53.003 | select(.opcode=="crc32c") 00:23:53.003 | "\(.module_name) \(.executed)"' 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2328606 00:23:53.003 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2328606 ']' 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2328606 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2328606 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2328606' 00:23:53.004 killing process with pid 2328606 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2328606 00:23:53.004 Received shutdown signal, test time was about 2.000000 seconds 00:23:53.004 00:23:53.004 Latency(us) 00:23:53.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.004 =================================================================================================================== 00:23:53.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.004 17:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2328606 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:53.004 17:46:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2329127 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2329127 /var/tmp/bperf.sock 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2329127 ']' 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:53.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.004 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:53.004 [2024-07-15 17:46:48.100835] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:23:53.004 [2024-07-15 17:46:48.100937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329127 ] 00:23:53.004 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.261 [2024-07-15 17:46:48.164688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.261 [2024-07-15 17:46:48.284506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.261 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.261 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:53.261 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:53.261 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:53.261 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:53.831 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:53.831 17:46:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.090 nvme0n1 00:23:54.090 17:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:54.090 17:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:54.350 Running I/O for 2 seconds... 
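After every pass the harness reads the crc32c accel statistics back from bperf and checks that digests were actually computed by the expected module (scan_dsa is false throughout this test, so the expected module is software). Condensed as a sketch from the accel_get_stats/jq commands logged after the earlier passes:

# read "module executed-count" for the crc32c opcode from the bperf accel stats
read -r acc_module acc_executed < <(
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))          # at least one digest must have been computed
[[ $acc_module == software ]]   # software crc32c expected when no DSA is in play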
00:23:56.258 00:23:56.258 Latency(us) 00:23:56.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.258 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:56.258 nvme0n1 : 2.01 18815.13 73.50 0.00 0.00 6786.97 6019.60 14854.83 00:23:56.258 =================================================================================================================== 00:23:56.258 Total : 18815.13 73.50 0.00 0.00 6786.97 6019.60 14854.83 00:23:56.258 0 00:23:56.258 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:56.258 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:56.258 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:56.258 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:56.258 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:56.258 | select(.opcode=="crc32c") 00:23:56.258 | "\(.module_name) \(.executed)"' 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2329127 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2329127 ']' 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2329127 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2329127 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:56.517 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:56.518 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2329127' 00:23:56.518 killing process with pid 2329127 00:23:56.518 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2329127 00:23:56.518 Received shutdown signal, test time was about 2.000000 seconds 00:23:56.518 00:23:56.518 Latency(us) 00:23:56.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.518 =================================================================================================================== 00:23:56.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.518 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2329127 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:56.777 17:46:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2329542 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2329542 /var/tmp/bperf.sock 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2329542 ']' 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:56.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.777 17:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.777 [2024-07-15 17:46:51.885682] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:23:56.777 [2024-07-15 17:46:51.885774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329542 ] 00:23:56.777 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:56.777 Zero copy mechanism will not be used. 
00:23:57.036 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.036 [2024-07-15 17:46:51.944943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.036 [2024-07-15 17:46:52.051977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.036 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.036 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:57.036 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:57.036 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:57.037 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:57.605 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:57.605 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:57.864 nvme0n1 00:23:57.864 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:57.864 17:46:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:57.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:57.864 Zero copy mechanism will not be used. 00:23:57.864 Running I/O for 2 seconds... 
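Each bperf instance is then torn down with the killprocess sequence already logged three times above: confirm the pid is still alive, read its command name, and only signal it if it is not a bare sudo wrapper. A condensed sketch of that behaviour (the pid here is just the first instance from above, used as an example):

pid=2328194                                        # first bperf pid above, for illustration
kill -0 "$pid"                                     # fails if the process already exited
process_name=$(ps --no-headers -o comm= "$pid")    # reactor_1 for these bperf reactors
if [ "$process_name" != sudo ]; then               # a sudo wrapper would need different handling
    echo "killing process with pid $pid"
    kill "$pid"
fi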
00:24:00.422 00:24:00.422 Latency(us) 00:24:00.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.422 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:00.422 nvme0n1 : 2.01 2116.13 264.52 0.00 0.00 7502.79 4975.88 11165.39 00:24:00.422 =================================================================================================================== 00:24:00.422 Total : 2116.13 264.52 0.00 0.00 7502.79 4975.88 11165.39 00:24:00.422 0 00:24:00.422 17:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:00.422 17:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:00.422 17:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:00.422 17:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:00.422 17:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:00.422 | select(.opcode=="crc32c") 00:24:00.422 | "\(.module_name) \(.executed)"' 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2329542 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2329542 ']' 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2329542 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2329542 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2329542' 00:24:00.422 killing process with pid 2329542 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2329542 00:24:00.422 Received shutdown signal, test time was about 2.000000 seconds 00:24:00.422 00:24:00.422 Latency(us) 00:24:00.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.422 =================================================================================================================== 00:24:00.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2329542 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2328168 00:24:00.422 17:46:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2328168 ']' 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2328168 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2328168 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2328168' 00:24:00.422 killing process with pid 2328168 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2328168 00:24:00.422 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2328168 00:24:00.679 00:24:00.679 real 0m15.543s 00:24:00.679 user 0m29.770s 00:24:00.679 sys 0m3.788s 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.679 ************************************ 00:24:00.679 END TEST nvmf_digest_clean 00:24:00.679 ************************************ 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.679 ************************************ 00:24:00.679 START TEST nvmf_digest_error 00:24:00.679 ************************************ 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2329978 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2329978 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2329978 ']' 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.679 17:46:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:00.936 [2024-07-15 17:46:55.850136] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:00.936 [2024-07-15 17:46:55.850232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.936 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.937 [2024-07-15 17:46:55.914861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.937 [2024-07-15 17:46:56.022059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.937 [2024-07-15 17:46:56.022117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.937 [2024-07-15 17:46:56.022130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.937 [2024-07-15 17:46:56.022142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.937 [2024-07-15 17:46:56.022168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.937 [2024-07-15 17:46:56.022193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.937 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.937 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:00.937 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.937 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.937 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:01.194 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.194 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:01.194 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.194 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:01.194 [2024-07-15 17:46:56.086732] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:01.194 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:01.195 null0 00:24:01.195 [2024-07-15 17:46:56.206529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.195 [2024-07-15 17:46:56.230754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2330117 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2330117 /var/tmp/bperf.sock 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2330117 ']' 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:01.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.195 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:01.195 [2024-07-15 17:46:56.278259] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:01.195 [2024-07-15 17:46:56.278319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330117 ] 00:24:01.195 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.452 [2024-07-15 17:46:56.339252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.452 [2024-07-15 17:46:56.455185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.452 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.452 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:01.452 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:01.452 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:02.017 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:02.017 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.017 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:02.017 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.017 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.017 17:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.275 nvme0n1 00:24:02.275 17:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:02.275 17:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.275 17:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:02.275 17:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.275 17:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:02.275 17:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.533 Running I/O for 2 seconds... 00:24:02.533 [2024-07-15 17:46:57.449329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.449379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.449401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.466280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.466317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.466336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.477716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.477751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.477771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.494112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.494142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.494158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.507199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.507233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.507254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.520741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.520775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.520803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.533874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.533931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23327 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.533948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.546862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.546904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.546939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.562189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.562236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.562255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.574832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.574884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.574921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.589280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.589314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.589333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.601854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.601896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.601932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.615968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.616013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.616030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.632076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.632106] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.632124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.644583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.644622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.644643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.533 [2024-07-15 17:46:57.657028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.533 [2024-07-15 17:46:57.657058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.533 [2024-07-15 17:46:57.657074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.671711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.671745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.671764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.685447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.685482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.685501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.697743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.697777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.697795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.712213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.712247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.712265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.727133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 
17:46:57.727182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.727201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.739326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.739360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.739379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.753751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.753786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.753805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.768445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.768480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.768498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.780950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.780979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.780994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.794587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.794622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.794640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.810826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.810861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.793 [2024-07-15 17:46:57.810890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.793 [2024-07-15 17:46:57.823073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x250ed50) 00:24:02.793 [2024-07-15 17:46:57.823102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.823118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.794 [2024-07-15 17:46:57.839050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.794 [2024-07-15 17:46:57.839080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.839096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.794 [2024-07-15 17:46:57.852960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.794 [2024-07-15 17:46:57.852991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.853009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.794 [2024-07-15 17:46:57.864259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.794 [2024-07-15 17:46:57.864293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.864312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.794 [2024-07-15 17:46:57.877948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.794 [2024-07-15 17:46:57.877980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.877996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.794 [2024-07-15 17:46:57.891890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.794 [2024-07-15 17:46:57.891923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.891955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.794 [2024-07-15 17:46:57.904860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.794 [2024-07-15 17:46:57.904902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.904937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:02.794 [2024-07-15 17:46:57.920753] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:02.794 [2024-07-15 17:46:57.920786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.794 [2024-07-15 17:46:57.920805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:57.932755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:57.932791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:57.932810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:57.948351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:57.948385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:57.948404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:57.963426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:57.963460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:57.963479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:57.976390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:57.976425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:57.976445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:57.992548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:57.992582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:57.992601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.006549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.006583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.006602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.019251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.019285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.019304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.034412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.034446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.034465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.047818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.047852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.047871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.063501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.063536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.063555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.075081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.075113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.075130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.088919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.088950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.088967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.102144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.102192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.102212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.117819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.117853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.117888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.128852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.128893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.128939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.146260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.146294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.146313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.158674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.158708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.158726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.171377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.171410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.171429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.054 [2024-07-15 17:46:58.185411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.054 [2024-07-15 17:46:58.185446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.054 [2024-07-15 17:46:58.185465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.198015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.198046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.198063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.211092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.211127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.211146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.226138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.226192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.226211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.239361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.239402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.239422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.253484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.253519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.253538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.264979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.265008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.265023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.279640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.279675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.279695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.293928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.293959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:03.314 [2024-07-15 17:46:58.293978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.307972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.308000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.308016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.320890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.320937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.320954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.334400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.334434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.334452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.348800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.348835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.348853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.360886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.314 [2024-07-15 17:46:58.360941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.314 [2024-07-15 17:46:58.360956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.314 [2024-07-15 17:46:58.375307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.315 [2024-07-15 17:46:58.375342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.315 [2024-07-15 17:46:58.375360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.315 [2024-07-15 17:46:58.389729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.315 [2024-07-15 17:46:58.389762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20249 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.315 [2024-07-15 17:46:58.389781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.315 [2024-07-15 17:46:58.401989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.315 [2024-07-15 17:46:58.402016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.315 [2024-07-15 17:46:58.402031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.315 [2024-07-15 17:46:58.417390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.315 [2024-07-15 17:46:58.417424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.315 [2024-07-15 17:46:58.417442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.315 [2024-07-15 17:46:58.431175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.315 [2024-07-15 17:46:58.431221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.315 [2024-07-15 17:46:58.431241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.315 [2024-07-15 17:46:58.443989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.315 [2024-07-15 17:46:58.444019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.315 [2024-07-15 17:46:58.444035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.575 [2024-07-15 17:46:58.456934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.456966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.456983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.470150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.470201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.470219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.484467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.484500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.484518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.497888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.497927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.497962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.511224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.511258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.511276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.524742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.524775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.524793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.539519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.539554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.539572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.551967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.551997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.552014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.568042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.568072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.568089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.583446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 
00:24:03.576 [2024-07-15 17:46:58.583480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.583499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.595725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.595758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.595776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.611249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.611295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.611314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.625398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.625431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.625450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.637501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.637535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.637554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.652158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.652189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.652220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.667970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.668000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.668017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.680107] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.680136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.680151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.695060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.695092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.695108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.576 [2024-07-15 17:46:58.709382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.576 [2024-07-15 17:46:58.709416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.576 [2024-07-15 17:46:58.709441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.722419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.722453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.722472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.737571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.737605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.737623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.751178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.751212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.751230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.762144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.762189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.762205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:03.838 [2024-07-15 17:46:58.775587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.775617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.775634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.786586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.786615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.786630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.801581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.801628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.816175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.816206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.816223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.826813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.826848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.826890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.839716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.839747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.839764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.852488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.852517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.852533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.865160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.865191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.865207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.878759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.878790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.878808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.890460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.890492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.890509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.903041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.903071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.903088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.913779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.913823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.913839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.926847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.926885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.926904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.940479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.940510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.838 [2024-07-15 17:46:58.940526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.838 [2024-07-15 17:46:58.952327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.838 [2024-07-15 17:46:58.952358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.839 [2024-07-15 17:46:58.952375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:03.839 [2024-07-15 17:46:58.965567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:03.839 [2024-07-15 17:46:58.965610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.839 [2024-07-15 17:46:58.965626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.099 [2024-07-15 17:46:58.979608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.099 [2024-07-15 17:46:58.979640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.099 [2024-07-15 17:46:58.979656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.099 [2024-07-15 17:46:58.992134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.099 [2024-07-15 17:46:58.992164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.099 [2024-07-15 17:46:58.992181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.099 [2024-07-15 17:46:59.004672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.099 [2024-07-15 17:46:59.004702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.004718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.017651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.017683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.017699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.030329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.030360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:04.100 [2024-07-15 17:46:59.030377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.042201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.042232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.042258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.054328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.054359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.054376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.067146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.067177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.067194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.079523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.079569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.079586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.094233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.094265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.094283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.105574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.105604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.105620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.119332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.119363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:12142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.119381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.131112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.131143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.131160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.144197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.144227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.144258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.157068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.157099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.157115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.168117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.168147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.168165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.181557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.181588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.181605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.193507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.193536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.193552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.205124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.205155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.205185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.219539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.219567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.219582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.100 [2024-07-15 17:46:59.232485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.100 [2024-07-15 17:46:59.232516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.100 [2024-07-15 17:46:59.232533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.359 [2024-07-15 17:46:59.244253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.359 [2024-07-15 17:46:59.244285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.359 [2024-07-15 17:46:59.244303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.359 [2024-07-15 17:46:59.257118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.257150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.257190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.270853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.270892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.270911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.281445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.281475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.281492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.295977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 
00:24:04.360 [2024-07-15 17:46:59.296008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.296024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.307797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.307828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.307844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.319659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.319690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.319707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.331473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.331501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.331517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.345117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.345147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.345163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.356656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.356701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.356718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.372353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.372390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.372423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.383163] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.383194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.383211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.396593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.396625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.396656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.409698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.409727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.409743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.422254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.422284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.422301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 [2024-07-15 17:46:59.433116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x250ed50) 00:24:04.360 [2024-07-15 17:46:59.433156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.360 [2024-07-15 17:46:59.433173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.360 00:24:04.360 Latency(us) 00:24:04.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.360 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:04.360 nvme0n1 : 2.00 18996.54 74.21 0.00 0.00 6729.50 3131.16 19029.71 00:24:04.360 =================================================================================================================== 00:24:04.360 Total : 18996.54 74.21 0.00 0.00 6729.50 3131.16 19029.71 00:24:04.360 0 00:24:04.360 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:04.360 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:04.360 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:04.360 | .driver_specific 00:24:04.360 | .nvme_error 00:24:04.360 | .status_code 00:24:04.360 | .command_transient_transport_error' 00:24:04.360 17:46:59 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2330117 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2330117 ']' 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2330117 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330117 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330117' 00:24:04.619 killing process with pid 2330117 00:24:04.619 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2330117 00:24:04.619 Received shutdown signal, test time was about 2.000000 seconds 00:24:04.619 00:24:04.619 Latency(us) 00:24:04.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.619 =================================================================================================================== 00:24:04.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.620 17:46:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2330117 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2330531 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2330531 /var/tmp/bperf.sock 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2330531 ']' 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
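The trace above is how the digest test decides pass/fail: get_transient_errcount reads the per-bdev NVMe error counters over the bdevperf RPC socket and requires the transient transport error count to be non-zero, i.e. the corrupted data digests were actually detected and surfaced by the initiator. A minimal shell sketch of that query, using only the rpc.py invocation and jq filter visible in the trace (the variable names here are illustrative, and the nvme_error block is only present because the bdev layer was configured with bdev_nvme_set_options --nvme-error-stat, as the trace shows for the next run):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path used by this job
  BPERF_SOCK=/var/tmp/bperf.sock                               # bdevperf RPC socket from the trace
  # query iostat for nvme0n1 and pull out the transient transport error counter
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # this run reported 149 such errors; any value > 0 means the injected
  # crc32c corruption was caught and reported as a transient transport error
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"
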
00:24:05.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.190 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:05.190 [2024-07-15 17:47:00.060167] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:05.190 [2024-07-15 17:47:00.060250] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330531 ] 00:24:05.190 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:05.190 Zero copy mechanism will not be used. 00:24:05.190 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.190 [2024-07-15 17:47:00.121425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.190 [2024-07-15 17:47:00.233791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.448 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.448 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:05.448 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:05.448 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:05.705 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:05.705 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.705 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:05.705 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.705 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.705 17:47:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.963 nvme0n1 00:24:05.963 17:47:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:05.963 17:47:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.963 17:47:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:05.963 17:47:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.963 17:47:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:05.963 17:47:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:06.223 I/O 
size of 131072 is greater than zero copy threshold (65536). 00:24:06.224 Zero copy mechanism will not be used. 00:24:06.224 Running I/O for 2 seconds... 00:24:06.224 [2024-07-15 17:47:01.132093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.132155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.132175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.142662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.142700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.142726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.152967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.153001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.153025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.163118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.163149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.163194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.173218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.173264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.173284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.183642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.183675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.183695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.194226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.194261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.194280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.205565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.205599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.205619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.216954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.216985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.217004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.228682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.228718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.228737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.240138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.240183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.240204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.251814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.251849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.251868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.263037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.263074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.274246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.274281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.274300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.285744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.285777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.285796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.297158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.297204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.297229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.308560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.308595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.308614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.319803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.319838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.319857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.330943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.330974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.330993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.342113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.342141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.342181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.224 [2024-07-15 17:47:01.353322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.224 [2024-07-15 17:47:01.353356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.224 [2024-07-15 17:47:01.353375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.481 [2024-07-15 17:47:01.365479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.481 [2024-07-15 17:47:01.365513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.481 [2024-07-15 17:47:01.365533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.376827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.376861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.376888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.387253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.387288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.387306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.398654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.398689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.398708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.409943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.409973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.409989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.420570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.420605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.420625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.431389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.431438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.431457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.442468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.442503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.442522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.453728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.453764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.453792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.465053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.465083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.465101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.476053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.476083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.476100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.487569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.487605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.487625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.498927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.498959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.498977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.510478] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.510514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.510533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.521767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.521803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.521822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.533200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.533231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.533266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.544814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.544850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.544869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.555848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.555893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.555930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.567112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.567143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.567175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.578705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.578741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.578760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.589991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.590020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.590037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.601188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.601237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.601258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.482 [2024-07-15 17:47:01.612344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.482 [2024-07-15 17:47:01.612379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.482 [2024-07-15 17:47:01.612399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.624066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.624099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.624116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.635231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.635267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.635286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.646343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.646373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.646415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.657478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.657512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.657531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.668857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.668907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.668942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.680183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.680232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.680251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.691413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.691447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.691466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.702789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.702823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.702842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.713936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.713967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.713984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.725287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.725321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.725341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.736541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.736575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.736594] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.747765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.747807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.747828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.759192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.759238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.759259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.770616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.770648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.741 [2024-07-15 17:47:01.770665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.741 [2024-07-15 17:47:01.781741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.741 [2024-07-15 17:47:01.781775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.781794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.793006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.793037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.793054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.804082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.804112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.804129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.815275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.815310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.815329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.826434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.826470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.826489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.837450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.837478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.837494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.848811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.848846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.848865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.860176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.860225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.860245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.742 [2024-07-15 17:47:01.871360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:06.742 [2024-07-15 17:47:01.871394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.742 [2024-07-15 17:47:01.871413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.882738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.882774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.882793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.894029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.894059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.894076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.905157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.905188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.905222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.916321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.916365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.916384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.927459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.927488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.927521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.938815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.938849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.938887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.950099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.950130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.950148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.961317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.961350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.961369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.972538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.972573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.972592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.983845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.983887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.983924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:01.994886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:01.994940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:01.994957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.002 [2024-07-15 17:47:02.006048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.002 [2024-07-15 17:47:02.006079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.002 [2024-07-15 17:47:02.006096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.017295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.017330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.017350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.028682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.028716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.028735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.040007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.040037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.040053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.051321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 
00:24:07.003 [2024-07-15 17:47:02.051356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.051375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.062442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.062477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.062497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.073709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.073745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.073763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.084969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.085000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.085017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.096158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.096188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.096222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.107493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.107528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.107548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.118585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.118620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.118638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.003 [2024-07-15 17:47:02.129818] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.003 [2024-07-15 17:47:02.129853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.003 [2024-07-15 17:47:02.129888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.141338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.141374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.141394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.152785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.152820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.152840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.164005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.164035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.164052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.175221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.175269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.175288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.186342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.186376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.186396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.197488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.197522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.197542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.208591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.208625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.208645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.219678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.219711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.219730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.230793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.230834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.230854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.242152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.242200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.242219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.253335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.253369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.253388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.264610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.264642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.264661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.275679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.275715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.275734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.286987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.287017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.287034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.298221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.298255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.298274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.309269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.309302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.309321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.320518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.320552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.320571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.331837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.331871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.331898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.343243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.343278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.343297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.354441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.354477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.354496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.365717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.365752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.365771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.376944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.376975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.376992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.262 [2024-07-15 17:47:02.388249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.262 [2024-07-15 17:47:02.388284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.262 [2024-07-15 17:47:02.388304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.399676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.399711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.399731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.411790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.411827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.411847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.423999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.424042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.424060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.435265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.435315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.435335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.446472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.446507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.446527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.457766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.457800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.457820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.469119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.469150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.469168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.549 [2024-07-15 17:47:02.480389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.549 [2024-07-15 17:47:02.480424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.549 [2024-07-15 17:47:02.480443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.491464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.491498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.491517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.502687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.502720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.502739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.513807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.513841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.513860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.525021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.525051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.525068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.536386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.536421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.536441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.547830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.547865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.547896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.559205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.559238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.559258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.570379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.570413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.570432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.581497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.581531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.581550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.592357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.592392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.592411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.603698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.603731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.603750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.615115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.615146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.615188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.626363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.626397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.626416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.637571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.637605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.637624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.648656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.648689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.648708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.550 [2024-07-15 17:47:02.660218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.550 [2024-07-15 17:47:02.660251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.550 [2024-07-15 17:47:02.660270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.671782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 
00:24:07.809 [2024-07-15 17:47:02.671816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.671836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.682992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.683021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.683038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.694345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.694379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.694399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.705536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.705570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.705589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.716726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.716767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.716787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.727895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.727942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.727959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.739194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.739242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.739261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.750251] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.750285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.750304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.761430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.761464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.761483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.772684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.772719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.772738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.783993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.784024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.784040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.795323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.795359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.795378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.806865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.806923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.806942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.818271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.818306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.818326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.829474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.829509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.829528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.840742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.840776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.840796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.851989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.852020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.852037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.863351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.863382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.863399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.874674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.874710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.874728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.886042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.809 [2024-07-15 17:47:02.886072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.809 [2024-07-15 17:47:02.886089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.809 [2024-07-15 17:47:02.897353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.810 [2024-07-15 17:47:02.897386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.810 [2024-07-15 17:47:02.897406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:07.810 [2024-07-15 17:47:02.908661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.810 [2024-07-15 17:47:02.908705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.810 [2024-07-15 17:47:02.908724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:07.810 [2024-07-15 17:47:02.919860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.810 [2024-07-15 17:47:02.919903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.810 [2024-07-15 17:47:02.919936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:07.810 [2024-07-15 17:47:02.931186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.810 [2024-07-15 17:47:02.931220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.810 [2024-07-15 17:47:02.931239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:07.810 [2024-07-15 17:47:02.942430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:07.810 [2024-07-15 17:47:02.942464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:07.810 [2024-07-15 17:47:02.942482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:02.953656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:02.953690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:02.953709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:02.964964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:02.964994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:02.965012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:02.976107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:02.976151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:02.976168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:02.987493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:02.987526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:02.987545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:02.999084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:02.999113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:02.999130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.010182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.010216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.010236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.021340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.021375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.021394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.032412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.032447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.032466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.044037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.044067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.044084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.055277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.055311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.055341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.066406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.066454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.066473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.077813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.077847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.077866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.088943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.088974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.088990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.100143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.100197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.100225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.111349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.111383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.069 [2024-07-15 17:47:03.111412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.069 [2024-07-15 17:47:03.122510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e0e4f0) 00:24:08.069 [2024-07-15 17:47:03.122544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.070 [2024-07-15 17:47:03.122574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:08.070
00:24:08.070 Latency(us)
00:24:08.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:08.070 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:08.070 nvme0n1 : 2.00 2760.75 345.09 0.00 0.00 5791.04 4611.79 12330.48
00:24:08.070 ===================================================================================================================
00:24:08.070 Total : 2760.75 345.09 0.00 0.00 5791.04 4611.79 12330.48
00:24:08.070 0
00:24:08.070 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:08.070 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:08.070 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:08.070 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:08.070 | .driver_specific
00:24:08.070 | .nvme_error
00:24:08.070 | .status_code
00:24:08.070 | .command_transient_transport_error'
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 178 > 0 ))
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2330531
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2330531 ']'
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2330531
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330531
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330531'
00:24:08.329 killing process with pid 2330531
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2330531
00:24:08.329 Received shutdown signal, test time was about 2.000000 seconds
00:24:08.329
00:24:08.329 Latency(us)
00:24:08.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:08.329 ===================================================================================================================
00:24:08.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:08.329 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2330531
00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2330943
00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r
/var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2330943 /var/tmp/bperf.sock 00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2330943 ']' 00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.587 17:47:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.845 [2024-07-15 17:47:03.738487] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:08.845 [2024-07-15 17:47:03.738580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330943 ] 00:24:08.845 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.845 [2024-07-15 17:47:03.800523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.845 [2024-07-15 17:47:03.915201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.103 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.103 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:09.103 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.103 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.361 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:09.361 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.361 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.362 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.362 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.362 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.619 nvme0n1 00:24:09.878 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:09.878 17:47:04 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.878 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.878 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.878 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:09.878 17:47:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.878 Running I/O for 2 seconds... 00:24:09.878 [2024-07-15 17:47:04.889291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f8618 00:24:09.878 [2024-07-15 17:47:04.890233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.890276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.901569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e01f8 00:24:09.878 [2024-07-15 17:47:04.902526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.902558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.914951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ff3c8 00:24:09.878 [2024-07-15 17:47:04.916013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.916042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.928310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e0630 00:24:09.878 [2024-07-15 17:47:04.929550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.929581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.941574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e1b48 00:24:09.878 [2024-07-15 17:47:04.943007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.943034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.953414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fd640 00:24:09.878 [2024-07-15 17:47:04.954362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:09.878 [2024-07-15 17:47:04.954393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.965872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190dece0 00:24:09.878 [2024-07-15 17:47:04.966810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.966840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.978625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ddc00 00:24:09.878 [2024-07-15 17:47:04.979599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.979634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:04.991234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fa7d8 00:24:09.878 [2024-07-15 17:47:04.992139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:04.992170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:09.878 [2024-07-15 17:47:05.003896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fb8b8 00:24:09.878 [2024-07-15 17:47:05.004827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:09.878 [2024-07-15 17:47:05.004858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.138 [2024-07-15 17:47:05.016585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fc998 00:24:10.138 [2024-07-15 17:47:05.017562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.138 [2024-07-15 17:47:05.017593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.029483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e01f8 00:24:10.139 [2024-07-15 17:47:05.030465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.030497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.042304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e12d8 00:24:10.139 [2024-07-15 17:47:05.043224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9212 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.043255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.054973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e23b8 00:24:10.139 [2024-07-15 17:47:05.055855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.055895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.067613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ed4e8 00:24:10.139 [2024-07-15 17:47:05.068573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.068604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.080423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ee5c8 00:24:10.139 [2024-07-15 17:47:05.081370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.081400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.093133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ef6a8 00:24:10.139 [2024-07-15 17:47:05.094077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.094119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.105966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f0788 00:24:10.139 [2024-07-15 17:47:05.106840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.106870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.118650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f46d0 00:24:10.139 [2024-07-15 17:47:05.119565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.119596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.131230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ea248 00:24:10.139 [2024-07-15 17:47:05.132099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:24871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.132125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.143855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190eb328 00:24:10.139 [2024-07-15 17:47:05.144749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.144779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.156584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ec408 00:24:10.139 [2024-07-15 17:47:05.157494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.157526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.169102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fd208 00:24:10.139 [2024-07-15 17:47:05.170058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.170084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.181533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190de8a8 00:24:10.139 [2024-07-15 17:47:05.182422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.182453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.194008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e27f0 00:24:10.139 [2024-07-15 17:47:05.194901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.194942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.206810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f1ca0 00:24:10.139 [2024-07-15 17:47:05.207640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.207670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.219672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ef270 00:24:10.139 [2024-07-15 17:47:05.220593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:1759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.220623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.232958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f9f68 00:24:10.139 [2024-07-15 17:47:05.234050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.234079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.244706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f92c0 00:24:10.139 [2024-07-15 17:47:05.246904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.246951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.256686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e9e10 00:24:10.139 [2024-07-15 17:47:05.257613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.257644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.139 [2024-07-15 17:47:05.269357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fc998 00:24:10.139 [2024-07-15 17:47:05.270359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.139 [2024-07-15 17:47:05.270390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.282058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fb8b8 00:24:10.400 [2024-07-15 17:47:05.282947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.282977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.294691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f2948 00:24:10.400 [2024-07-15 17:47:05.295625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.295655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.307190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f1868 00:24:10.400 [2024-07-15 17:47:05.308141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.308176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.319887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f57b0 00:24:10.400 [2024-07-15 17:47:05.320807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.320837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.332692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e8088 00:24:10.400 [2024-07-15 17:47:05.333640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.333671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.345465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e6fa8 00:24:10.400 [2024-07-15 17:47:05.346384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.346414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.358120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f8e88 00:24:10.400 [2024-07-15 17:47:05.359103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.359128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.370758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f7da8 00:24:10.400 [2024-07-15 17:47:05.371676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.371706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.382453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190eff18 00:24:10.400 [2024-07-15 17:47:05.383382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.383412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.395773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e7c50 00:24:10.400 [2024-07-15 
17:47:05.396886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.396941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.409900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f7100 00:24:10.400 [2024-07-15 17:47:05.411210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.411243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.422528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f81e0 00:24:10.400 [2024-07-15 17:47:05.423859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.423899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.435164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fb048 00:24:10.400 [2024-07-15 17:47:05.436483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.436513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.447760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f9f68 00:24:10.400 [2024-07-15 17:47:05.449096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.449122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.460440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190de470 00:24:10.400 [2024-07-15 17:47:05.461712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.461741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.473029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190df550 00:24:10.400 [2024-07-15 17:47:05.474305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.474335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.485649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with 
pdu=0x2000190e0630 00:24:10.400 [2024-07-15 17:47:05.486983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.487025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.498370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e1710 00:24:10.400 [2024-07-15 17:47:05.499663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.499692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.510842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e8d30 00:24:10.400 [2024-07-15 17:47:05.512170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.512200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.400 [2024-07-15 17:47:05.523542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ebfd0 00:24:10.400 [2024-07-15 17:47:05.524866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.400 [2024-07-15 17:47:05.524903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.536246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fd640 00:24:10.661 [2024-07-15 17:47:05.537550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.537579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.548836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f5be8 00:24:10.661 [2024-07-15 17:47:05.550139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.550179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.561506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e7c50 00:24:10.661 [2024-07-15 17:47:05.562765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.562795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.574146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ed76b0) with pdu=0x2000190e6b70 00:24:10.661 [2024-07-15 17:47:05.575476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.575506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.586712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fe720 00:24:10.661 [2024-07-15 17:47:05.588052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.588082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.599468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e2c28 00:24:10.661 [2024-07-15 17:47:05.600790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.600820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.612135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f6cc8 00:24:10.661 [2024-07-15 17:47:05.613449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.613479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.624868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f7da8 00:24:10.661 [2024-07-15 17:47:05.626197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.626237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.637669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f8e88 00:24:10.661 [2024-07-15 17:47:05.638997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.639044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.650489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fa3a0 00:24:10.661 [2024-07-15 17:47:05.651821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.651852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.663220] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190de038 00:24:10.661 [2024-07-15 17:47:05.664514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.664546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.675744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190df118 00:24:10.661 [2024-07-15 17:47:05.677097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.677123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.688486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e01f8 00:24:10.661 [2024-07-15 17:47:05.689756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.689786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.701174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e12d8 00:24:10.661 [2024-07-15 17:47:05.702479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.661 [2024-07-15 17:47:05.702509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.661 [2024-07-15 17:47:05.713835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e23b8 00:24:10.662 [2024-07-15 17:47:05.715174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.662 [2024-07-15 17:47:05.715204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.662 [2024-07-15 17:47:05.726558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ed4e8 00:24:10.662 [2024-07-15 17:47:05.727863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.662 [2024-07-15 17:47:05.727900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.662 [2024-07-15 17:47:05.739244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ec408 00:24:10.662 [2024-07-15 17:47:05.740537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.662 [2024-07-15 17:47:05.740567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.662 [2024-07-15 17:47:05.751786] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f0ff8 00:24:10.662 [2024-07-15 17:47:05.753120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.662 [2024-07-15 17:47:05.753147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.662 [2024-07-15 17:47:05.764480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f6020 00:24:10.662 [2024-07-15 17:47:05.765740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.662 [2024-07-15 17:47:05.765770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.662 [2024-07-15 17:47:05.777079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e7818 00:24:10.662 [2024-07-15 17:47:05.778368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.662 [2024-07-15 17:47:05.778397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.662 [2024-07-15 17:47:05.789667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f96f8 00:24:10.662 [2024-07-15 17:47:05.791006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.662 [2024-07-15 17:47:05.791033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.922 [2024-07-15 17:47:05.802508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ed0b0 00:24:10.922 [2024-07-15 17:47:05.803777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.922 [2024-07-15 17:47:05.803807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.922 [2024-07-15 17:47:05.815129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e3060 00:24:10.922 [2024-07-15 17:47:05.816431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.922 [2024-07-15 17:47:05.816460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.922 [2024-07-15 17:47:05.827813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f7100 00:24:10.922 [2024-07-15 17:47:05.829131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.922 [2024-07-15 17:47:05.829173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.922 
[2024-07-15 17:47:05.840513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f81e0 00:24:10.922 [2024-07-15 17:47:05.841816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.922 [2024-07-15 17:47:05.841846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.922 [2024-07-15 17:47:05.853215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fb048 00:24:10.923 [2024-07-15 17:47:05.854481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.854511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.865735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f9f68 00:24:10.923 [2024-07-15 17:47:05.867072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.867098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.878435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190de470 00:24:10.923 [2024-07-15 17:47:05.879911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.879955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.891272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190df550 00:24:10.923 [2024-07-15 17:47:05.892569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.892600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.903867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e0630 00:24:10.923 [2024-07-15 17:47:05.905199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.905229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.916523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e1710 00:24:10.923 [2024-07-15 17:47:05.917837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.917868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 
m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.929233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e8d30 00:24:10.923 [2024-07-15 17:47:05.930521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.930551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.941777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ebfd0 00:24:10.923 [2024-07-15 17:47:05.943120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.943163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.954519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fd640 00:24:10.923 [2024-07-15 17:47:05.955833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.955864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.967148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f5be8 00:24:10.923 [2024-07-15 17:47:05.968427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.968462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.979763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e7c50 00:24:10.923 [2024-07-15 17:47:05.981088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.981114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:05.992462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e6b70 00:24:10.923 [2024-07-15 17:47:05.993754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:05.993783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:06.005149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fe720 00:24:10.923 [2024-07-15 17:47:06.006459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:06.006489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:06.017708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e2c28 00:24:10.923 [2024-07-15 17:47:06.019002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:06.019028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:06.030408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f6cc8 00:24:10.923 [2024-07-15 17:47:06.031697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:06.031726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:06.043024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f7da8 00:24:10.923 [2024-07-15 17:47:06.044294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:06.044324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.923 [2024-07-15 17:47:06.055713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f8e88 00:24:10.923 [2024-07-15 17:47:06.057058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.923 [2024-07-15 17:47:06.057086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.068556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fa3a0 00:24:11.183 [2024-07-15 17:47:06.069834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.069865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.081255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190de038 00:24:11.183 [2024-07-15 17:47:06.082546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.082577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.093869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190df118 00:24:11.183 [2024-07-15 17:47:06.095213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.095243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.106556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e01f8 00:24:11.183 [2024-07-15 17:47:06.107861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.107899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.119195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e12d8 00:24:11.183 [2024-07-15 17:47:06.120482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.120513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.131781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e23b8 00:24:11.183 [2024-07-15 17:47:06.133100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.133126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.144483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ed4e8 00:24:11.183 [2024-07-15 17:47:06.145743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.145773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.157233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ec408 00:24:11.183 [2024-07-15 17:47:06.158545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.158576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.169857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f0ff8 00:24:11.183 [2024-07-15 17:47:06.171150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.171194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.182535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f6020 00:24:11.183 [2024-07-15 17:47:06.183807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.183838] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.195166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e7818 00:24:11.183 [2024-07-15 17:47:06.196450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.196478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.207671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f96f8 00:24:11.183 [2024-07-15 17:47:06.209008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.209036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.220504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ed0b0 00:24:11.183 [2024-07-15 17:47:06.221791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.221821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.233131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e3060 00:24:11.183 [2024-07-15 17:47:06.234423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.234453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.245723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f7100 00:24:11.183 [2024-07-15 17:47:06.247075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.247101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.258487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f81e0 00:24:11.183 [2024-07-15 17:47:06.259762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.259792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.271179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fb048 00:24:11.183 [2024-07-15 17:47:06.272481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.272510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.283755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f9f68 00:24:11.183 [2024-07-15 17:47:06.285092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.285118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.295607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ef270 00:24:11.183 [2024-07-15 17:47:06.296887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.296938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.183 [2024-07-15 17:47:06.308875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ee5c8 00:24:11.183 [2024-07-15 17:47:06.310327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.183 [2024-07-15 17:47:06.310358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.320802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ebfd0 00:24:11.442 [2024-07-15 17:47:06.321788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.321819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.333347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ff3c8 00:24:11.442 [2024-07-15 17:47:06.334271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.334300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.345954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e6fa8 00:24:11.442 [2024-07-15 17:47:06.346932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.346958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.358702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e8088 00:24:11.442 [2024-07-15 17:47:06.359647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 
17:47:06.359677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.371307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f57b0 00:24:11.442 [2024-07-15 17:47:06.372232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.372262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.383954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fda78 00:24:11.442 [2024-07-15 17:47:06.384932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.384958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.396658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190eff18 00:24:11.442 [2024-07-15 17:47:06.397608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.397638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.409319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190eee38 00:24:11.442 [2024-07-15 17:47:06.410253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.410283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.421930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190edd58 00:24:11.442 [2024-07-15 17:47:06.422835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.422867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.434641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190eaab8 00:24:11.442 [2024-07-15 17:47:06.435569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.435599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.447267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f3e60 00:24:11.442 [2024-07-15 17:47:06.448179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:11.442 [2024-07-15 17:47:06.448220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.459888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f4f40 00:24:11.442 [2024-07-15 17:47:06.460822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.460851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.472612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190feb58 00:24:11.442 [2024-07-15 17:47:06.473553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.473583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.485331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e99d8 00:24:11.442 [2024-07-15 17:47:06.486323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.486353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.497934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e12d8 00:24:11.442 [2024-07-15 17:47:06.498833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.498862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.510611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e23b8 00:24:11.442 [2024-07-15 17:47:06.511536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.511566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.523302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ed4e8 00:24:11.442 [2024-07-15 17:47:06.524227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.524257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.535894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ec408 00:24:11.442 [2024-07-15 17:47:06.536820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24373 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.536849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.548595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f92c0 00:24:11.442 [2024-07-15 17:47:06.549535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.549566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.562813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e73e0 00:24:11.442 [2024-07-15 17:47:06.564406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.442 [2024-07-15 17:47:06.564436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.442 [2024-07-15 17:47:06.576158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f6cc8 00:24:11.701 [2024-07-15 17:47:06.578012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.578040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.588052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fe2e8 00:24:11.701 [2024-07-15 17:47:06.589303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.589334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.600488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fc128 00:24:11.701 [2024-07-15 17:47:06.601764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.601794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.613238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f31b8 00:24:11.701 [2024-07-15 17:47:06.614527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.614557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.625796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f20d8 00:24:11.701 [2024-07-15 17:47:06.627112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20356 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.627143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.638489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ed0b0 00:24:11.701 [2024-07-15 17:47:06.639802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.639830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.651171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fac10 00:24:11.701 [2024-07-15 17:47:06.652401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.652430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.663511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190eb328 00:24:11.701 [2024-07-15 17:47:06.664792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.664821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.676231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190ea248 00:24:11.701 [2024-07-15 17:47:06.677493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.677524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.688759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f46d0 00:24:11.701 [2024-07-15 17:47:06.690099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.690126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.701439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f35f0 00:24:11.701 [2024-07-15 17:47:06.702711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.702741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.714116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e8d30 00:24:11.701 [2024-07-15 17:47:06.715368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:73 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.715398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.726664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e1710 00:24:11.701 [2024-07-15 17:47:06.727983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.728008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.739345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e38d0 00:24:11.701 [2024-07-15 17:47:06.740613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.740642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.751996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fd208 00:24:11.701 [2024-07-15 17:47:06.753256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.753286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.764604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e4140 00:24:11.701 [2024-07-15 17:47:06.765899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.765942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.701 [2024-07-15 17:47:06.777318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e5220 00:24:11.701 [2024-07-15 17:47:06.778576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.701 [2024-07-15 17:47:06.778606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.702 [2024-07-15 17:47:06.789893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e6300 00:24:11.702 [2024-07-15 17:47:06.791161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.702 [2024-07-15 17:47:06.791205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.702 [2024-07-15 17:47:06.802555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fc560 00:24:11.702 [2024-07-15 17:47:06.803858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.702 [2024-07-15 17:47:06.803895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.702 [2024-07-15 17:47:06.815287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fb480 00:24:11.702 [2024-07-15 17:47:06.816547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.702 [2024-07-15 17:47:06.816577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.702 [2024-07-15 17:47:06.827826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f2510 00:24:11.702 [2024-07-15 17:47:06.829126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.702 [2024-07-15 17:47:06.829167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.960 [2024-07-15 17:47:06.842325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190f1430 00:24:11.960 [2024-07-15 17:47:06.844265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.960 [2024-07-15 17:47:06.844295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.960 [2024-07-15 17:47:06.854173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190e9e10 00:24:11.961 [2024-07-15 17:47:06.855664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.961 [2024-07-15 17:47:06.855695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.961 [2024-07-15 17:47:06.866538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ed76b0) with pdu=0x2000190fef90 00:24:11.961 [2024-07-15 17:47:06.868020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.961 [2024-07-15 17:47:06.868045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.961 00:24:11.961 Latency(us) 00:24:11.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.961 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:11.961 nvme0n1 : 2.00 20098.84 78.51 0.00 0.00 6357.74 2645.71 17087.91 00:24:11.961 =================================================================================================================== 00:24:11.961 Total : 20098.84 78.51 0.00 0.00 6357.74 2645.71 17087.91 00:24:11.961 0 00:24:11.961 17:47:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:11.961 17:47:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:24:11.961 17:47:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:11.961 17:47:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:11.961 | .driver_specific 00:24:11.961 | .nvme_error 00:24:11.961 | .status_code 00:24:11.961 | .command_transient_transport_error' 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2330943 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2330943 ']' 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2330943 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2330943 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2330943' 00:24:12.219 killing process with pid 2330943 00:24:12.219 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2330943 00:24:12.219 Received shutdown signal, test time was about 2.000000 seconds 00:24:12.219 00:24:12.219 Latency(us) 00:24:12.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.219 =================================================================================================================== 00:24:12.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.220 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2330943 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2331462 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2331462 /var/tmp/bperf.sock 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2331462 ']' 00:24:12.478 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:12.478 17:47:07 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.479 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:12.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:12.479 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.479 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.479 [2024-07-15 17:47:07.482811] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:12.479 [2024-07-15 17:47:07.482897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331462 ] 00:24:12.479 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:12.479 Zero copy mechanism will not be used. 00:24:12.479 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.479 [2024-07-15 17:47:07.544454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.736 [2024-07-15 17:47:07.659324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.736 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.736 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:12.736 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:12.736 17:47:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:12.995 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:12.995 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.995 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.995 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.995 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:12.995 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.562 nvme0n1 00:24:13.562 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:13.562 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.562 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.562 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.562 17:47:08 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:13.562 17:47:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:13.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.562 Zero copy mechanism will not be used. 00:24:13.562 Running I/O for 2 seconds... 00:24:13.562 [2024-07-15 17:47:08.638787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.562 [2024-07-15 17:47:08.639197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.562 [2024-07-15 17:47:08.639235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.562 [2024-07-15 17:47:08.653481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.562 [2024-07-15 17:47:08.653922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.562 [2024-07-15 17:47:08.653953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.562 [2024-07-15 17:47:08.669325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.562 [2024-07-15 17:47:08.669676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.562 [2024-07-15 17:47:08.669706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.562 [2024-07-15 17:47:08.685472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.562 [2024-07-15 17:47:08.685872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.562 [2024-07-15 17:47:08.685909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.822 [2024-07-15 17:47:08.702045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.822 [2024-07-15 17:47:08.702397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.822 [2024-07-15 17:47:08.702425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.822 [2024-07-15 17:47:08.718512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.718927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.718955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
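The host/digest.sh trace above prepares the second error-injection pass: bdevperf is started on /var/tmp/bperf.sock with a 128 KiB random-write workload at queue depth 16 for 2 seconds, NVMe error statistics are enabled, the accel crc32c error injection is first cleared and then re-armed in corrupt mode for 32 operations, the controller is attached with data digest checking (--ddgst), and perform_tests drives the run. Condensed into a plain shell sketch, with the sockets, flags and NQN copied from the trace, repo-relative paths assumed, and the injection calls going through the default RPC socket as rpc_cmd does in the trace:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # clear any previous injection
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt the next 32 crc32c operations
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests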
00:24:13.823 [2024-07-15 17:47:08.732433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.732798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.732826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.748171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.748594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.748627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.762933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.763312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.763340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.779096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.779475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.779517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.795016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.795406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.795434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.810340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.810686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.810729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.825074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.825425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.825454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.841724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.842145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.842199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.855668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.856031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.856060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.870986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.871235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.871263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.886459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.886823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.886851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.901837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.902209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.902237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.916862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.917252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.917281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.932870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.933332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.933375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.823 [2024-07-15 17:47:08.947271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:13.823 [2024-07-15 17:47:08.947634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.823 [2024-07-15 17:47:08.947677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.084 [2024-07-15 17:47:08.962199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.084 [2024-07-15 17:47:08.962563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:08.962592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:08.975975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:08.976345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:08.976372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:08.991490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:08.991853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:08.991888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.007358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.007739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.007787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.022780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.023130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.023176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.038808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.039270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.039312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.053452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.053842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.053870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.068742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.068991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.069019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.083716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.084116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.084144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.099829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.100194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.100222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.115685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.116123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.116169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.130626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.130999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.131027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.145063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.145447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 
[2024-07-15 17:47:09.145492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.161998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.162376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.162404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.177537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.177930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.177958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.193448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.193866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.193901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.085 [2024-07-15 17:47:09.209404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.085 [2024-07-15 17:47:09.209805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.085 [2024-07-15 17:47:09.209846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.225318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.225698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.225741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.240820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.241167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.241196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.256445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.256924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.256975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.272033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.272289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.272317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.287220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.287475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.287503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.302567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.302938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.302966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.318511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.318896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.318943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.333810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.334190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.334234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.348123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.348523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.348551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.362351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.362727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.362770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.378895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.379242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.379285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.394525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.394955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.394997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.409474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.409821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.409868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.424409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.424833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.424862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.440601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.440999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.441028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.457105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.457488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.457517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.346 [2024-07-15 17:47:09.472938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.346 [2024-07-15 17:47:09.473235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.346 [2024-07-15 17:47:09.473263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.607 [2024-07-15 17:47:09.489054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.489445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.489473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.505626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.506165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.506191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.521223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.521692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.521721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.535582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.535986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.536028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.551747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.552138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.552180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.566283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.566633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.566676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.582114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 
[2024-07-15 17:47:09.582507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.582549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.597031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.597381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.597409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.612057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.612422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.612450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.626608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.626994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.627023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.642555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.642940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.642984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.658995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.659359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.659387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.673690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.674109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.674138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.688448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.688802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.688829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.704194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.704557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.704585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.717777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.718141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.718169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.608 [2024-07-15 17:47:09.731844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.608 [2024-07-15 17:47:09.732230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.608 [2024-07-15 17:47:09.732271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.746549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.746853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.746903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.762264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.762650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.762693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.778374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.778753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.778781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.793574] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.793983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.794011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.810475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.810900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.810932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.825125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.825479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.825507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.840043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.840424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.840451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.854143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.854489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.854517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.869419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.869792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.869819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.884817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.885154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.885182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:14.868 [2024-07-15 17:47:09.898239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.898591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.898619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.911804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.912164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.912209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.927235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.927583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.927612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.943312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.943678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.943724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.958150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.958501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.958531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.973773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.974151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.974180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.868 [2024-07-15 17:47:09.989658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:14.868 [2024-07-15 17:47:09.990039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.868 [2024-07-15 17:47:09.990069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.005652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.006070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.006118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.022099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.022489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.022522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.035840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.036200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.036232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.052261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.052638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.052668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.067456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.067807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.067860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.083594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.083955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.083985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.099553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.099908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.099962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.115272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.115687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.115729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.131099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.131472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.131500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.145419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.145710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.145738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.160574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.160943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.160971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.175475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.175821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.175864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.192074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.192471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.192514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.208315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.208690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.208733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.223980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.224350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.224380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.239986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.240336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.240380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.130 [2024-07-15 17:47:10.255579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.130 [2024-07-15 17:47:10.255943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.130 [2024-07-15 17:47:10.255973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.271255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.271616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.271644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.285718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.286177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.286220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.300934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.301286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.301330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.316275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.316655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 
[2024-07-15 17:47:10.316699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.332334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.332698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.332727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.346972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.347338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.347366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.361763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.362129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.362158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.377844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.378236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.378265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.392845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.393202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.393230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.407415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.407763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.407806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.422721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.423075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.423118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.438012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.438448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.438475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.453014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.453265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.453293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.468001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.468376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.468414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.483084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.483434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.483464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.498135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.391 [2024-07-15 17:47:10.498546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.391 [2024-07-15 17:47:10.498575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.391 [2024-07-15 17:47:10.512683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.392 [2024-07-15 17:47:10.513040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.392 [2024-07-15 17:47:10.513070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.392 [2024-07-15 17:47:10.526259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.651 [2024-07-15 17:47:10.526626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.651 [2024-07-15 17:47:10.526655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.651 [2024-07-15 17:47:10.540721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.651 [2024-07-15 17:47:10.541110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.651 [2024-07-15 17:47:10.541155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.651 [2024-07-15 17:47:10.557799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.651 [2024-07-15 17:47:10.558151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.651 [2024-07-15 17:47:10.558180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.651 [2024-07-15 17:47:10.571908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.651 [2024-07-15 17:47:10.572273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.651 [2024-07-15 17:47:10.572300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.651 [2024-07-15 17:47:10.586756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.651 [2024-07-15 17:47:10.587130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.651 [2024-07-15 17:47:10.587157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.651 [2024-07-15 17:47:10.603847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.651 [2024-07-15 17:47:10.604264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.651 [2024-07-15 17:47:10.604292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.651 [2024-07-15 17:47:10.620041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d0caf0) with pdu=0x2000190fef90 00:24:15.651 [2024-07-15 17:47:10.620481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.651 [2024-07-15 17:47:10.620510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.651 00:24:15.651 Latency(us) 00:24:15.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.651 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 
00:24:15.651 nvme0n1 : 2.01 2013.88 251.73 0.00 0.00 7926.27 6068.15 18447.17 00:24:15.651 =================================================================================================================== 00:24:15.651 Total : 2013.88 251.73 0.00 0.00 7926.27 6068.15 18447.17 00:24:15.651 0 00:24:15.651 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:15.651 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:15.651 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:15.651 | .driver_specific 00:24:15.651 | .nvme_error 00:24:15.651 | .status_code 00:24:15.651 | .command_transient_transport_error' 00:24:15.651 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 130 > 0 )) 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2331462 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2331462 ']' 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2331462 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2331462 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2331462' 00:24:15.930 killing process with pid 2331462 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2331462 00:24:15.930 Received shutdown signal, test time was about 2.000000 seconds 00:24:15.930 00:24:15.930 Latency(us) 00:24:15.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.930 =================================================================================================================== 00:24:15.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.930 17:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2331462 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2329978 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2329978 ']' 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2329978 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2329978 00:24:16.190 17:47:11 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2329978' 00:24:16.190 killing process with pid 2329978 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2329978 00:24:16.190 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2329978 00:24:16.449 00:24:16.449 real 0m15.740s 00:24:16.449 user 0m31.627s 00:24:16.449 sys 0m3.856s 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.449 ************************************ 00:24:16.449 END TEST nvmf_digest_error 00:24:16.449 ************************************ 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.449 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.449 rmmod nvme_tcp 00:24:16.708 rmmod nvme_fabrics 00:24:16.708 rmmod nvme_keyring 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2329978 ']' 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2329978 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2329978 ']' 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2329978 00:24:16.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2329978) - No such process 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2329978 is not found' 00:24:16.708 Process with pid 2329978 is not found 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:24:16.708 17:47:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.613 17:47:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.613 00:24:18.613 real 0m35.562s 00:24:18.613 user 1m2.199s 00:24:18.613 sys 0m9.095s 00:24:18.613 17:47:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.613 17:47:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 ************************************ 00:24:18.613 END TEST nvmf_digest 00:24:18.613 ************************************ 00:24:18.613 17:47:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:18.613 17:47:13 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:18.613 17:47:13 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:18.613 17:47:13 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:18.613 17:47:13 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:18.613 17:47:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:18.613 17:47:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.613 17:47:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 ************************************ 00:24:18.613 START TEST nvmf_bdevperf 00:24:18.613 ************************************ 00:24:18.613 17:47:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:18.871 * Looking for test storage... 00:24:18.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.871 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
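For reference, a minimal sketch (not part of the captured run) of how a host NQN/ID pair like the NVME_HOSTNQN and NVME_HOSTID values generated above could be used to attach to the subsystem this test creates further down; the nqn.2016-06.io.spdk:cnode1 subsystem and the 10.0.0.2:4420 listener match the values configured later in this trace, while deriving the host ID from the NQN's UUID suffix is an assumption about the harness, not something shown verbatim here.

    # generate a host NQN with nvme-cli, e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTNQN=$(nvme gen-hostnqn)
    # reuse the UUID portion of the NQN as the host ID (assumed convention)
    HOSTID=${HOSTNQN##*:}
    # attach to the target with the same transport/address/port the test configures
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"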
00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.872 17:47:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:20.776 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:20.776 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:20.776 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:20.776 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.776 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:24:20.777 00:24:20.777 --- 10.0.0.2 ping statistics --- 00:24:20.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.777 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:24:20.777 00:24:20.777 --- 10.0.0.1 ping statistics --- 00:24:20.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.777 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2333822 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2333822 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2333822 ']' 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.777 17:47:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.777 [2024-07-15 17:47:15.889088] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:20.777 [2024-07-15 17:47:15.889186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.040 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.040 [2024-07-15 17:47:15.953044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:21.040 [2024-07-15 17:47:16.059424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.040 [2024-07-15 17:47:16.059477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.040 [2024-07-15 17:47:16.059501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.040 [2024-07-15 17:47:16.059512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.040 [2024-07-15 17:47:16.059521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.040 [2024-07-15 17:47:16.059618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.040 [2024-07-15 17:47:16.059681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.040 [2024-07-15 17:47:16.059683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.040 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.040 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:21.040 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.040 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.040 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.350 [2024-07-15 17:47:16.192331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.350 Malloc0 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.350 [2024-07-15 17:47:16.248355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.350 { 00:24:21.350 "params": { 00:24:21.350 "name": "Nvme$subsystem", 00:24:21.350 "trtype": "$TEST_TRANSPORT", 00:24:21.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.350 "adrfam": "ipv4", 00:24:21.350 "trsvcid": "$NVMF_PORT", 00:24:21.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.350 "hdgst": ${hdgst:-false}, 00:24:21.350 "ddgst": ${ddgst:-false} 00:24:21.350 }, 00:24:21.350 "method": "bdev_nvme_attach_controller" 00:24:21.350 } 00:24:21.350 EOF 00:24:21.350 )") 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:21.350 17:47:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:21.350 "params": { 00:24:21.350 "name": "Nvme1", 00:24:21.350 "trtype": "tcp", 00:24:21.350 "traddr": "10.0.0.2", 00:24:21.350 "adrfam": "ipv4", 00:24:21.350 "trsvcid": "4420", 00:24:21.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.350 "hdgst": false, 00:24:21.350 "ddgst": false 00:24:21.350 }, 00:24:21.350 "method": "bdev_nvme_attach_controller" 00:24:21.350 }' 00:24:21.350 [2024-07-15 17:47:16.293315] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
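The target-side plumbing in the trace above boils down to five RPCs: create the TCP transport, create a RAM-backed Malloc0 bdev, create subsystem nqn.2016-06.io.spdk:cnode1, add Malloc0 as its namespace, and add a TCP listener on 10.0.0.2:4420. Outside the test harness the same sequence can be issued with scripts/rpc.py against the /var/tmp/spdk.sock socket the target is shown listening on; this is a minimal sketch of that sequence (same flags as the rpc_cmd calls above), not a verbatim excerpt of the test:

    # Minimal sketch: replay the RPC sequence from the trace with scripts/rpc.py,
    # assuming nvmf_tgt is already running and serving RPCs on /var/tmp/spdk.sock.
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420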
00:24:21.350 [2024-07-15 17:47:16.293406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333846 ] 00:24:21.350 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.350 [2024-07-15 17:47:16.354752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.609 [2024-07-15 17:47:16.466590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.868 Running I/O for 1 seconds... 00:24:22.804 00:24:22.804 Latency(us) 00:24:22.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.804 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:22.804 Verification LBA range: start 0x0 length 0x4000 00:24:22.804 Nvme1n1 : 1.01 8801.53 34.38 0.00 0.00 14481.49 2487.94 18932.62 00:24:22.804 =================================================================================================================== 00:24:22.804 Total : 8801.53 34.38 0.00 0.00 14481.49 2487.94 18932.62 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2334110 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:23.063 { 00:24:23.063 "params": { 00:24:23.063 "name": "Nvme$subsystem", 00:24:23.063 "trtype": "$TEST_TRANSPORT", 00:24:23.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:23.063 "adrfam": "ipv4", 00:24:23.063 "trsvcid": "$NVMF_PORT", 00:24:23.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:23.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:23.063 "hdgst": ${hdgst:-false}, 00:24:23.063 "ddgst": ${ddgst:-false} 00:24:23.063 }, 00:24:23.063 "method": "bdev_nvme_attach_controller" 00:24:23.063 } 00:24:23.063 EOF 00:24:23.063 )") 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:23.063 17:47:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:23.063 "params": { 00:24:23.063 "name": "Nvme1", 00:24:23.063 "trtype": "tcp", 00:24:23.063 "traddr": "10.0.0.2", 00:24:23.063 "adrfam": "ipv4", 00:24:23.063 "trsvcid": "4420", 00:24:23.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.063 "hdgst": false, 00:24:23.063 "ddgst": false 00:24:23.063 }, 00:24:23.063 "method": "bdev_nvme_attach_controller" 00:24:23.063 }' 00:24:23.063 [2024-07-15 17:47:18.117932] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
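Both bdevperf invocations receive their bdev configuration as JSON over a /dev/fd process substitution (fd 62 for the -t 1 run, fd 63 for the -t 15 -f run); gen_nvmf_target_json expands the heredoc above into the bdev_nvme_attach_controller entry shown by the printf output. A standalone equivalent is sketched below; the outer "subsystems" wrapper is the usual SPDK JSON-config shape and is assumed here, while the params block is copied from the expanded JSON in this log:

    # Sketch: run bdevperf by hand against the listener created above,
    # reading an equivalent config from a file instead of /dev/fd.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1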
00:24:23.063 [2024-07-15 17:47:18.118020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334110 ] 00:24:23.063 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.063 [2024-07-15 17:47:18.178608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.322 [2024-07-15 17:47:18.286054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.579 Running I/O for 15 seconds... 00:24:26.112 17:47:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2333822 00:24:26.112 17:47:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:26.112 [2024-07-15 17:47:21.090112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.112 [2024-07-15 17:47:21.090641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.112 [2024-07-15 17:47:21.090660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
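Every pair of entries in this stretch is the same event: nvme_qpair prints a still-queued READ or WRITE command and then completes it as ABORTED - SQ DELETION, because the I/O qpair to the killed target is being torn down and all outstanding commands are drained. When triaging a capture like this it is usually enough to collapse the flood into a count and an LBA range; a small sketch, assuming the output was saved to a hypothetical bdevperf.log:

    # Sketch: summarize the abort flood instead of reading it line by line.
    grep -c 'ABORTED - SQ DELETION' bdevperf.log                                  # how many completions were aborted
    grep -o 'lba:[0-9]*' bdevperf.log | cut -d: -f2 | sort -n | sed -n '1p;$p'    # lowest and highest aborted LBA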
00:24:26.113 [2024-07-15 17:47:21.090857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.090971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.090987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091240] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.113 [2024-07-15 17:47:21.091865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.113 [2024-07-15 17:47:21.091887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.091906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.091922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.091955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.091970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.091985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.114 [2024-07-15 17:47:21.092575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 
17:47:21.092609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.092980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.092995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.093009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.093024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.114 [2024-07-15 17:47:21.093039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.114 [2024-07-15 17:47:21.093054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.115 [2024-07-15 17:47:21.093068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.115 [2024-07-15 17:47:21.093383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.093975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.093989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.094005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.094019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:26.115 [2024-07-15 17:47:21.094034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.094048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.094062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.094076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.094090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.094104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.094119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.094148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.094184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.094200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.094217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.115 [2024-07-15 17:47:21.094232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.115 [2024-07-15 17:47:21.094249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.116 [2024-07-15 17:47:21.094628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 17:47:21.094644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e14c0 is same with the state(5) to be set 00:24:26.116 [2024-07-15 17:47:21.094664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.116 [2024-07-15 17:47:21.094681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.116 [2024-07-15 17:47:21.094700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55688 len:8 PRP1 0x0 PRP2 0x0 00:24:26.116 [2024-07-15 17:47:21.094715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.116 [2024-07-15 
17:47:21.094788] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20e14c0 was disconnected and freed. reset controller. 00:24:26.116 [2024-07-15 17:47:21.098652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.116 [2024-07-15 17:47:21.098729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.116 [2024-07-15 17:47:21.099485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.116 [2024-07-15 17:47:21.099519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.116 [2024-07-15 17:47:21.099537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.116 [2024-07-15 17:47:21.099778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.116 [2024-07-15 17:47:21.100033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.116 [2024-07-15 17:47:21.100057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.116 [2024-07-15 17:47:21.100077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.116 [2024-07-15 17:47:21.103640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.116 [2024-07-15 17:47:21.112989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.116 [2024-07-15 17:47:21.113469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.116 [2024-07-15 17:47:21.113503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.116 [2024-07-15 17:47:21.113522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.116 [2024-07-15 17:47:21.113760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.116 [2024-07-15 17:47:21.114016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.116 [2024-07-15 17:47:21.114042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.116 [2024-07-15 17:47:21.114058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.116 [2024-07-15 17:47:21.117620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
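From here on the trace repeats one reconnect cycle per attempt: bdev_nvme disconnects and resets the controller, the TCP connect() to 10.0.0.2:4420 is refused (errno = 111) because nvmf_tgt pid 2333822 was killed above, and the reset is reported as failed before the next retry. Nothing changes until something is listening on that port again; a generic way to wait for that from the host side (not part of the test scripts) is a small sketch like:

    # Sketch: poll until 10.0.0.2:4420 accepts TCP connections again,
    # i.e. until the NVMe-oF listener is back.
    until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        sleep 1
    done
    echo 'listener on 10.0.0.2:4420 is reachable again'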
00:24:26.116 [2024-07-15 17:47:21.126873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.116 [2024-07-15 17:47:21.127347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.116 [2024-07-15 17:47:21.127375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.116 [2024-07-15 17:47:21.127390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.116 [2024-07-15 17:47:21.127639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.116 [2024-07-15 17:47:21.127894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.116 [2024-07-15 17:47:21.127919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.116 [2024-07-15 17:47:21.127955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.116 [2024-07-15 17:47:21.131503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.116 [2024-07-15 17:47:21.140751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.116 [2024-07-15 17:47:21.141195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.116 [2024-07-15 17:47:21.141227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.116 [2024-07-15 17:47:21.141245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.116 [2024-07-15 17:47:21.141484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.116 [2024-07-15 17:47:21.141725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.116 [2024-07-15 17:47:21.141751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.116 [2024-07-15 17:47:21.141767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.116 [2024-07-15 17:47:21.145336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.116 [2024-07-15 17:47:21.154582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.116 [2024-07-15 17:47:21.155038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.116 [2024-07-15 17:47:21.155070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.116 [2024-07-15 17:47:21.155088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.116 [2024-07-15 17:47:21.155327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.116 [2024-07-15 17:47:21.155569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.116 [2024-07-15 17:47:21.155594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.116 [2024-07-15 17:47:21.155610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.116 [2024-07-15 17:47:21.159181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.117 [2024-07-15 17:47:21.168428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.117 [2024-07-15 17:47:21.168953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.117 [2024-07-15 17:47:21.168985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.117 [2024-07-15 17:47:21.169003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.117 [2024-07-15 17:47:21.169242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.117 [2024-07-15 17:47:21.169484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.117 [2024-07-15 17:47:21.169509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.117 [2024-07-15 17:47:21.169526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.117 [2024-07-15 17:47:21.173094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.117 [2024-07-15 17:47:21.182339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.117 [2024-07-15 17:47:21.182783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.117 [2024-07-15 17:47:21.182818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.117 [2024-07-15 17:47:21.182838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.117 [2024-07-15 17:47:21.183086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.117 [2024-07-15 17:47:21.183328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.117 [2024-07-15 17:47:21.183354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.117 [2024-07-15 17:47:21.183371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.117 [2024-07-15 17:47:21.186939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.117 [2024-07-15 17:47:21.196185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.117 [2024-07-15 17:47:21.196686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.117 [2024-07-15 17:47:21.196713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.117 [2024-07-15 17:47:21.196729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.117 [2024-07-15 17:47:21.196997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.117 [2024-07-15 17:47:21.197240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.117 [2024-07-15 17:47:21.197266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.117 [2024-07-15 17:47:21.197282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.117 [2024-07-15 17:47:21.200843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.117 [2024-07-15 17:47:21.210096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.117 [2024-07-15 17:47:21.210517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.117 [2024-07-15 17:47:21.210550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.117 [2024-07-15 17:47:21.210569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.117 [2024-07-15 17:47:21.210808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.117 [2024-07-15 17:47:21.211065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.117 [2024-07-15 17:47:21.211092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.117 [2024-07-15 17:47:21.211108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.117 [2024-07-15 17:47:21.214668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.117 [2024-07-15 17:47:21.223921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.117 [2024-07-15 17:47:21.224386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.117 [2024-07-15 17:47:21.224418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.117 [2024-07-15 17:47:21.224436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.117 [2024-07-15 17:47:21.224674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.117 [2024-07-15 17:47:21.224937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.117 [2024-07-15 17:47:21.224964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.117 [2024-07-15 17:47:21.224979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.117 [2024-07-15 17:47:21.228544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.117 [2024-07-15 17:47:21.237799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.117 [2024-07-15 17:47:21.238264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.117 [2024-07-15 17:47:21.238297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.117 [2024-07-15 17:47:21.238315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.117 [2024-07-15 17:47:21.238553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.117 [2024-07-15 17:47:21.238795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.117 [2024-07-15 17:47:21.238821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.117 [2024-07-15 17:47:21.238837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.117 [2024-07-15 17:47:21.242401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.378 [2024-07-15 17:47:21.251642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.378 [2024-07-15 17:47:21.252107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.378 [2024-07-15 17:47:21.252139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.378 [2024-07-15 17:47:21.252158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.378 [2024-07-15 17:47:21.252396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.378 [2024-07-15 17:47:21.252638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.378 [2024-07-15 17:47:21.252664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.378 [2024-07-15 17:47:21.252680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.378 [2024-07-15 17:47:21.256245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.378 [2024-07-15 17:47:21.265483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.378 [2024-07-15 17:47:21.265924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.378 [2024-07-15 17:47:21.265956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.378 [2024-07-15 17:47:21.265974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.378 [2024-07-15 17:47:21.266212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.378 [2024-07-15 17:47:21.266453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.378 [2024-07-15 17:47:21.266478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.378 [2024-07-15 17:47:21.266494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.378 [2024-07-15 17:47:21.270064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.378 [2024-07-15 17:47:21.279310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.378 [2024-07-15 17:47:21.279760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.378 [2024-07-15 17:47:21.279792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.378 [2024-07-15 17:47:21.279811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.378 [2024-07-15 17:47:21.280061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.378 [2024-07-15 17:47:21.280304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.378 [2024-07-15 17:47:21.280329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.378 [2024-07-15 17:47:21.280346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.378 [2024-07-15 17:47:21.283914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.378 [2024-07-15 17:47:21.293159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.378 [2024-07-15 17:47:21.293604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.378 [2024-07-15 17:47:21.293636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.378 [2024-07-15 17:47:21.293653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.378 [2024-07-15 17:47:21.293905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.378 [2024-07-15 17:47:21.294147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.378 [2024-07-15 17:47:21.294173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.378 [2024-07-15 17:47:21.294189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.378 [2024-07-15 17:47:21.297754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.378 [2024-07-15 17:47:21.307006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.378 [2024-07-15 17:47:21.307461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.378 [2024-07-15 17:47:21.307492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.378 [2024-07-15 17:47:21.307510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.378 [2024-07-15 17:47:21.307747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.378 [2024-07-15 17:47:21.308002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.378 [2024-07-15 17:47:21.308029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.378 [2024-07-15 17:47:21.308045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.378 [2024-07-15 17:47:21.311601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.378 [2024-07-15 17:47:21.320843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.378 [2024-07-15 17:47:21.321291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.378 [2024-07-15 17:47:21.321323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.378 [2024-07-15 17:47:21.321346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.378 [2024-07-15 17:47:21.321585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.378 [2024-07-15 17:47:21.321827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.321852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.321868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.325440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.379 [2024-07-15 17:47:21.334684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.335135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.335167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.335185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.335424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.335666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.335691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.335706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.339273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.379 [2024-07-15 17:47:21.348521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.348940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.348972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.348991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.349230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.349473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.349499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.349515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.353082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.379 [2024-07-15 17:47:21.362534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.362983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.363016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.363034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.363272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.363513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.363544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.363561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.367132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.379 [2024-07-15 17:47:21.376377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.376820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.376852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.376869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.377120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.377362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.377388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.377405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.380968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.379 [2024-07-15 17:47:21.390210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.390651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.390682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.390700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.390950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.391192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.391218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.391233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.394795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.379 [2024-07-15 17:47:21.404044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.404496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.404528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.404546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.404783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.405038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.405064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.405079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.408640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.379 [2024-07-15 17:47:21.417905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.418349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.418381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.418399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.418637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.418892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.418918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.418934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.422491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.379 [2024-07-15 17:47:21.431751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.432188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.432220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.432239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.432477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.379 [2024-07-15 17:47:21.432719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.379 [2024-07-15 17:47:21.432744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.379 [2024-07-15 17:47:21.432760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.379 [2024-07-15 17:47:21.436337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.379 [2024-07-15 17:47:21.445591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.379 [2024-07-15 17:47:21.446027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.379 [2024-07-15 17:47:21.446059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.379 [2024-07-15 17:47:21.446077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.379 [2024-07-15 17:47:21.446316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.380 [2024-07-15 17:47:21.446558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.380 [2024-07-15 17:47:21.446582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.380 [2024-07-15 17:47:21.446598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.380 [2024-07-15 17:47:21.450170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.380 [2024-07-15 17:47:21.459441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.380 [2024-07-15 17:47:21.459892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.380 [2024-07-15 17:47:21.459924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.380 [2024-07-15 17:47:21.459942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.380 [2024-07-15 17:47:21.460186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.380 [2024-07-15 17:47:21.460430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.380 [2024-07-15 17:47:21.460455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.380 [2024-07-15 17:47:21.460471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.380 [2024-07-15 17:47:21.464049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.380 [2024-07-15 17:47:21.473305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.380 [2024-07-15 17:47:21.473757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.380 [2024-07-15 17:47:21.473807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.380 [2024-07-15 17:47:21.473825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.380 [2024-07-15 17:47:21.474075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.380 [2024-07-15 17:47:21.474317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.380 [2024-07-15 17:47:21.474341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.380 [2024-07-15 17:47:21.474358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.380 [2024-07-15 17:47:21.477930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.380 [2024-07-15 17:47:21.487177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.380 [2024-07-15 17:47:21.487650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.380 [2024-07-15 17:47:21.487698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.380 [2024-07-15 17:47:21.487716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.380 [2024-07-15 17:47:21.487966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.380 [2024-07-15 17:47:21.488210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.380 [2024-07-15 17:47:21.488234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.380 [2024-07-15 17:47:21.488249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.380 [2024-07-15 17:47:21.491811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.380 [2024-07-15 17:47:21.501073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.380 [2024-07-15 17:47:21.501456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.380 [2024-07-15 17:47:21.501487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.380 [2024-07-15 17:47:21.501505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.380 [2024-07-15 17:47:21.501743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.380 [2024-07-15 17:47:21.501996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.380 [2024-07-15 17:47:21.502022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.380 [2024-07-15 17:47:21.502044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.380 [2024-07-15 17:47:21.505610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.641 [2024-07-15 17:47:21.515084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.641 [2024-07-15 17:47:21.515530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.641 [2024-07-15 17:47:21.515562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.641 [2024-07-15 17:47:21.515579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.641 [2024-07-15 17:47:21.515817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.641 [2024-07-15 17:47:21.516082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.641 [2024-07-15 17:47:21.516107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.641 [2024-07-15 17:47:21.516123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.641 [2024-07-15 17:47:21.519690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.641 [2024-07-15 17:47:21.528979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.641 [2024-07-15 17:47:21.529444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.641 [2024-07-15 17:47:21.529477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.641 [2024-07-15 17:47:21.529496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.641 [2024-07-15 17:47:21.529735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.641 [2024-07-15 17:47:21.529989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.641 [2024-07-15 17:47:21.530014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.641 [2024-07-15 17:47:21.530030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.641 [2024-07-15 17:47:21.533585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.641 [2024-07-15 17:47:21.542841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.641 [2024-07-15 17:47:21.543304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.641 [2024-07-15 17:47:21.543337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.641 [2024-07-15 17:47:21.543356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.641 [2024-07-15 17:47:21.543594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.641 [2024-07-15 17:47:21.543837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.641 [2024-07-15 17:47:21.543862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.641 [2024-07-15 17:47:21.543887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.641 [2024-07-15 17:47:21.547464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.641 [2024-07-15 17:47:21.556706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.641 [2024-07-15 17:47:21.557141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.641 [2024-07-15 17:47:21.557173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.641 [2024-07-15 17:47:21.557191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.641 [2024-07-15 17:47:21.557430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.641 [2024-07-15 17:47:21.557672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.641 [2024-07-15 17:47:21.557697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.641 [2024-07-15 17:47:21.557713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.641 [2024-07-15 17:47:21.561280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.641 [2024-07-15 17:47:21.570732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.641 [2024-07-15 17:47:21.571168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.641 [2024-07-15 17:47:21.571201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.641 [2024-07-15 17:47:21.571220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.641 [2024-07-15 17:47:21.571458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.641 [2024-07-15 17:47:21.571701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.641 [2024-07-15 17:47:21.571725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.641 [2024-07-15 17:47:21.571742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.641 [2024-07-15 17:47:21.575318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.641 [2024-07-15 17:47:21.584563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.641 [2024-07-15 17:47:21.585019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.641 [2024-07-15 17:47:21.585063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.641 [2024-07-15 17:47:21.585081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.641 [2024-07-15 17:47:21.585320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.641 [2024-07-15 17:47:21.585561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.641 [2024-07-15 17:47:21.585587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.641 [2024-07-15 17:47:21.585603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.589174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.642 [2024-07-15 17:47:21.598442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.598887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.598919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.598937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.599185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.599429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.599454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.599470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.603040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.642 [2024-07-15 17:47:21.612312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.612727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.612759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.612777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.613026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.613269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.613294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.613310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.616866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.642 [2024-07-15 17:47:21.626330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.626784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.626816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.626834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.627080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.627323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.627348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.627363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.630932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.642 [2024-07-15 17:47:21.640185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.640617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.640650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.640668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.640917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.641160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.641185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.641207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.644768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.642 [2024-07-15 17:47:21.654017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.654458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.654489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.654507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.654746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.654997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.655023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.655038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.658592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.642 [2024-07-15 17:47:21.667841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.668290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.668322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.668340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.668578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.668820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.668846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.668862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.672429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.642 [2024-07-15 17:47:21.681669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.682106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.682138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.682155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.682394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.682635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.682660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.682676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.686242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.642 [2024-07-15 17:47:21.695688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.696107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.696145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.696164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.696402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.696645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.696670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.696686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.700256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.642 [2024-07-15 17:47:21.709700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.710129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.710161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.710179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.710417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.710658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.710683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.710699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.714269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.642 [2024-07-15 17:47:21.723717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.724143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.724176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.724194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.724431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.724673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.724699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.642 [2024-07-15 17:47:21.724715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.642 [2024-07-15 17:47:21.728285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.642 [2024-07-15 17:47:21.737729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.642 [2024-07-15 17:47:21.738204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.642 [2024-07-15 17:47:21.738237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.642 [2024-07-15 17:47:21.738256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.642 [2024-07-15 17:47:21.738495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.642 [2024-07-15 17:47:21.738742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.642 [2024-07-15 17:47:21.738766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.643 [2024-07-15 17:47:21.738782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.643 [2024-07-15 17:47:21.742348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.643 [2024-07-15 17:47:21.751607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.643 [2024-07-15 17:47:21.752039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.643 [2024-07-15 17:47:21.752074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.643 [2024-07-15 17:47:21.752092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.643 [2024-07-15 17:47:21.752330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.643 [2024-07-15 17:47:21.752571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.643 [2024-07-15 17:47:21.752597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.643 [2024-07-15 17:47:21.752613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.643 [2024-07-15 17:47:21.756179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.643 [2024-07-15 17:47:21.765622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.643 [2024-07-15 17:47:21.766073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.643 [2024-07-15 17:47:21.766105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.643 [2024-07-15 17:47:21.766124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.643 [2024-07-15 17:47:21.766362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.643 [2024-07-15 17:47:21.766604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.643 [2024-07-15 17:47:21.766629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.643 [2024-07-15 17:47:21.766644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.643 [2024-07-15 17:47:21.770208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.903 [2024-07-15 17:47:21.779453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.779896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.779928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.779946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.780185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.780427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.780452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.780469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.784041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.903 [2024-07-15 17:47:21.793283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.793704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.793735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.793753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.794003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.794245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.794270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.794286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.797843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.903 [2024-07-15 17:47:21.807304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.807748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.807780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.807799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.808046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.808289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.808313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.808329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.811894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.903 [2024-07-15 17:47:21.821136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.821561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.821592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.821610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.821848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.822099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.822124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.822140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.825707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.903 [2024-07-15 17:47:21.834969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.835424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.835455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.835479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.835718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.835969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.835995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.836011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.839572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.903 [2024-07-15 17:47:21.848813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.849274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.849305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.849323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.849561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.849802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.849828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.849843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.853407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.903 [2024-07-15 17:47:21.862649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.863101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.863134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.863152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.863391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.863635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.863660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.863677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.867243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.903 [2024-07-15 17:47:21.876480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.876931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.876962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.903 [2024-07-15 17:47:21.876980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.903 [2024-07-15 17:47:21.877219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.903 [2024-07-15 17:47:21.877460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.903 [2024-07-15 17:47:21.877490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.903 [2024-07-15 17:47:21.877507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.903 [2024-07-15 17:47:21.881073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.903 [2024-07-15 17:47:21.890309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.903 [2024-07-15 17:47:21.890746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.903 [2024-07-15 17:47:21.890778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.890795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.891045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.891287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.891313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.891329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.894892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.904 [2024-07-15 17:47:21.904127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:21.904577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:21.904608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.904626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.904864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.905116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.905142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.905158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.908715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.904 [2024-07-15 17:47:21.917962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:21.918380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:21.918411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.918429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.918667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.918919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.918946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.918962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.922520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.904 [2024-07-15 17:47:21.931985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:21.932425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:21.932456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.932473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.932711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.932964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.932990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.933007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.936562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.904 [2024-07-15 17:47:21.945830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:21.946267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:21.946299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.946317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.946555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.946796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.946821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.946837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.950404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.904 [2024-07-15 17:47:21.959853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:21.960304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:21.960336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.960355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.960593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.960837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.960863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.960888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.964449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.904 [2024-07-15 17:47:21.973689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:21.974139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:21.974173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.974191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.974435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.974679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.974705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.974721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.978288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.904 [2024-07-15 17:47:21.987527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:21.987950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:21.987982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:21.987999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:21.988238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:21.988479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.904 [2024-07-15 17:47:21.988515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.904 [2024-07-15 17:47:21.988531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.904 [2024-07-15 17:47:21.992099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.904 [2024-07-15 17:47:22.001546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.904 [2024-07-15 17:47:22.002000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.904 [2024-07-15 17:47:22.002032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.904 [2024-07-15 17:47:22.002051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.904 [2024-07-15 17:47:22.002289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.904 [2024-07-15 17:47:22.002531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.905 [2024-07-15 17:47:22.002555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.905 [2024-07-15 17:47:22.002571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.905 [2024-07-15 17:47:22.006137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.905 [2024-07-15 17:47:22.015381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.905 [2024-07-15 17:47:22.015830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.905 [2024-07-15 17:47:22.015861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.905 [2024-07-15 17:47:22.015888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.905 [2024-07-15 17:47:22.016128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.905 [2024-07-15 17:47:22.016370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.905 [2024-07-15 17:47:22.016395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.905 [2024-07-15 17:47:22.016416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.905 [2024-07-15 17:47:22.019985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.905 [2024-07-15 17:47:22.029237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.905 [2024-07-15 17:47:22.029656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.905 [2024-07-15 17:47:22.029688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:26.905 [2024-07-15 17:47:22.029706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:26.905 [2024-07-15 17:47:22.029954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:26.905 [2024-07-15 17:47:22.030197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.905 [2024-07-15 17:47:22.030223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.905 [2024-07-15 17:47:22.030239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.905 [2024-07-15 17:47:22.033796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.166 [2024-07-15 17:47:22.043079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.166 [2024-07-15 17:47:22.043524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.166 [2024-07-15 17:47:22.043556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.166 [2024-07-15 17:47:22.043574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.166 [2024-07-15 17:47:22.043812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.166 [2024-07-15 17:47:22.044064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.166 [2024-07-15 17:47:22.044090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.166 [2024-07-15 17:47:22.044105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.166 [2024-07-15 17:47:22.047671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.166 [2024-07-15 17:47:22.056925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.166 [2024-07-15 17:47:22.057370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.166 [2024-07-15 17:47:22.057401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.166 [2024-07-15 17:47:22.057418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.166 [2024-07-15 17:47:22.057656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.166 [2024-07-15 17:47:22.057910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.166 [2024-07-15 17:47:22.057936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.166 [2024-07-15 17:47:22.057953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.166 [2024-07-15 17:47:22.061510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.166 [2024-07-15 17:47:22.070851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.166 [2024-07-15 17:47:22.071286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.166 [2024-07-15 17:47:22.071319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.166 [2024-07-15 17:47:22.071337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.166 [2024-07-15 17:47:22.071576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.166 [2024-07-15 17:47:22.071818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.166 [2024-07-15 17:47:22.071843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.166 [2024-07-15 17:47:22.071859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.166 [2024-07-15 17:47:22.075431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.167 [2024-07-15 17:47:22.084685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.085141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.085173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.085190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.085429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.085671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.085696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.085712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.089277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.167 [2024-07-15 17:47:22.098530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.098972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.099004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.099023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.099261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.099504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.099528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.099545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.103113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.167 [2024-07-15 17:47:22.112361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.112803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.112834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.112853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.113099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.113347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.113372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.113388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.117171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.167 [2024-07-15 17:47:22.126219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.126640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.126672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.126690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.126940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.127183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.127208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.127224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.130782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.167 [2024-07-15 17:47:22.140247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.140671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.140704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.140721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.140971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.141214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.141239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.141254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.144812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.167 [2024-07-15 17:47:22.154268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.154729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.154760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.154778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.155027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.155269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.155294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.155310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.158873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.167 [2024-07-15 17:47:22.168120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.168564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.168596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.168614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.168852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.169102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.169128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.169144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.172700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.167 [2024-07-15 17:47:22.181950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.182393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.182425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.182442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.182680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.182932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.182956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.182973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.186532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.167 [2024-07-15 17:47:22.195777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.196226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.196259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.196276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.196515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.196757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.196782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.196797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.200364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.167 [2024-07-15 17:47:22.209604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.210025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.210065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.210084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.210324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.210567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.210592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.210608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.214172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.167 [2024-07-15 17:47:22.223621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.167 [2024-07-15 17:47:22.224079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.167 [2024-07-15 17:47:22.224112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.167 [2024-07-15 17:47:22.224131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.167 [2024-07-15 17:47:22.224370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.167 [2024-07-15 17:47:22.224614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.167 [2024-07-15 17:47:22.224639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.167 [2024-07-15 17:47:22.224655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.167 [2024-07-15 17:47:22.228229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.168 [2024-07-15 17:47:22.237463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.168 [2024-07-15 17:47:22.237884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.168 [2024-07-15 17:47:22.237915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.168 [2024-07-15 17:47:22.237933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.168 [2024-07-15 17:47:22.238171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.168 [2024-07-15 17:47:22.238413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.168 [2024-07-15 17:47:22.238438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.168 [2024-07-15 17:47:22.238455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.168 [2024-07-15 17:47:22.242022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.168 [2024-07-15 17:47:22.251474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.168 [2024-07-15 17:47:22.251928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.168 [2024-07-15 17:47:22.251960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.168 [2024-07-15 17:47:22.251978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.168 [2024-07-15 17:47:22.252216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.168 [2024-07-15 17:47:22.252464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.168 [2024-07-15 17:47:22.252489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.168 [2024-07-15 17:47:22.252504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.168 [2024-07-15 17:47:22.256069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.168 [2024-07-15 17:47:22.265309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.168 [2024-07-15 17:47:22.265768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.168 [2024-07-15 17:47:22.265800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.168 [2024-07-15 17:47:22.265818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.168 [2024-07-15 17:47:22.266068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.168 [2024-07-15 17:47:22.266310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.168 [2024-07-15 17:47:22.266335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.168 [2024-07-15 17:47:22.266351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.168 [2024-07-15 17:47:22.269914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.168 [2024-07-15 17:47:22.279149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.168 [2024-07-15 17:47:22.279606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.168 [2024-07-15 17:47:22.279638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.168 [2024-07-15 17:47:22.279656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.168 [2024-07-15 17:47:22.279904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.168 [2024-07-15 17:47:22.280147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.168 [2024-07-15 17:47:22.280172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.168 [2024-07-15 17:47:22.280188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.168 [2024-07-15 17:47:22.283743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.168 [2024-07-15 17:47:22.292987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.168 [2024-07-15 17:47:22.293403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.168 [2024-07-15 17:47:22.293434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.168 [2024-07-15 17:47:22.293452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.168 [2024-07-15 17:47:22.293689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.168 [2024-07-15 17:47:22.293942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.168 [2024-07-15 17:47:22.293968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.168 [2024-07-15 17:47:22.293985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.168 [2024-07-15 17:47:22.297548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.430 [2024-07-15 17:47:22.306799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.307235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.307268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.307286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.430 [2024-07-15 17:47:22.307525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.430 [2024-07-15 17:47:22.307768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.430 [2024-07-15 17:47:22.307794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.430 [2024-07-15 17:47:22.307810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.430 [2024-07-15 17:47:22.311379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.430 [2024-07-15 17:47:22.320821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.321282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.321313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.321331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.430 [2024-07-15 17:47:22.321569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.430 [2024-07-15 17:47:22.321811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.430 [2024-07-15 17:47:22.321836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.430 [2024-07-15 17:47:22.321853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.430 [2024-07-15 17:47:22.325417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.430 [2024-07-15 17:47:22.334662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.335111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.335143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.335161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.430 [2024-07-15 17:47:22.335399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.430 [2024-07-15 17:47:22.335641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.430 [2024-07-15 17:47:22.335666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.430 [2024-07-15 17:47:22.335681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.430 [2024-07-15 17:47:22.339250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.430 [2024-07-15 17:47:22.348485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.348939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.348972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.348995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.430 [2024-07-15 17:47:22.349234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.430 [2024-07-15 17:47:22.349476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.430 [2024-07-15 17:47:22.349501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.430 [2024-07-15 17:47:22.349517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.430 [2024-07-15 17:47:22.353082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.430 [2024-07-15 17:47:22.362320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.362776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.362807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.362825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.430 [2024-07-15 17:47:22.363072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.430 [2024-07-15 17:47:22.363313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.430 [2024-07-15 17:47:22.363339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.430 [2024-07-15 17:47:22.363355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.430 [2024-07-15 17:47:22.366923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.430 [2024-07-15 17:47:22.376160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.376612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.376644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.376661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.430 [2024-07-15 17:47:22.376910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.430 [2024-07-15 17:47:22.377152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.430 [2024-07-15 17:47:22.377178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.430 [2024-07-15 17:47:22.377194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.430 [2024-07-15 17:47:22.380747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.430 [2024-07-15 17:47:22.389991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.390433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.390465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.390483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.430 [2024-07-15 17:47:22.390722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.430 [2024-07-15 17:47:22.390976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.430 [2024-07-15 17:47:22.391008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.430 [2024-07-15 17:47:22.391025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.430 [2024-07-15 17:47:22.394581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.430 [2024-07-15 17:47:22.403819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.430 [2024-07-15 17:47:22.404245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.430 [2024-07-15 17:47:22.404278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.430 [2024-07-15 17:47:22.404296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.404534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.404776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.404802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.404817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.408379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.431 [2024-07-15 17:47:22.417824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.418273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.418304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.418322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.418560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.418801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.418827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.418843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.422406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.431 [2024-07-15 17:47:22.431646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.432100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.432132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.432149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.432387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.432628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.432653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.432669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.436236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.431 [2024-07-15 17:47:22.445471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.445900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.445932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.445951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.446189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.446431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.446456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.446473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.450040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.431 [2024-07-15 17:47:22.459501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.459925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.459958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.459976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.460216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.460459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.460485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.460501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.464066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.431 [2024-07-15 17:47:22.473518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.473987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.474021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.474039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.474278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.474519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.474545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.474561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.478129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.431 [2024-07-15 17:47:22.487364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.487807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.487839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.487857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.488113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.488354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.488379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.488396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.491967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.431 [2024-07-15 17:47:22.501219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.501634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.501667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.501685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.501937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.502179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.502204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.502220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.505782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.431 [2024-07-15 17:47:22.515050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.515490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.515523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.515541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.515780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.516035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.516061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.516078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.519638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.431 [2024-07-15 17:47:22.528900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.529331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.529363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.529381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.529619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.529861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.529899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.529922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.533481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.431 [2024-07-15 17:47:22.542727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.543229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.543260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.543278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.431 [2024-07-15 17:47:22.543515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.431 [2024-07-15 17:47:22.543756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.431 [2024-07-15 17:47:22.543781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.431 [2024-07-15 17:47:22.543798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.431 [2024-07-15 17:47:22.547369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.431 [2024-07-15 17:47:22.556611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.431 [2024-07-15 17:47:22.557063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.431 [2024-07-15 17:47:22.557095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.431 [2024-07-15 17:47:22.557112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.432 [2024-07-15 17:47:22.557350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.432 [2024-07-15 17:47:22.557592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.432 [2024-07-15 17:47:22.557618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.432 [2024-07-15 17:47:22.557634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.432 [2024-07-15 17:47:22.561209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.692 [2024-07-15 17:47:22.570468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.692 [2024-07-15 17:47:22.570913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.692 [2024-07-15 17:47:22.570945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.692 [2024-07-15 17:47:22.570963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.692 [2024-07-15 17:47:22.571202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.692 [2024-07-15 17:47:22.571445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.692 [2024-07-15 17:47:22.571471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.692 [2024-07-15 17:47:22.571487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.692 [2024-07-15 17:47:22.575057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.692 [2024-07-15 17:47:22.584337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.692 [2024-07-15 17:47:22.584794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.692 [2024-07-15 17:47:22.584826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.692 [2024-07-15 17:47:22.584844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.692 [2024-07-15 17:47:22.585092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.692 [2024-07-15 17:47:22.585335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.692 [2024-07-15 17:47:22.585360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.692 [2024-07-15 17:47:22.585376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.692 [2024-07-15 17:47:22.588944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.692 [2024-07-15 17:47:22.598188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.692 [2024-07-15 17:47:22.598641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.692 [2024-07-15 17:47:22.598673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.692 [2024-07-15 17:47:22.598690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.692 [2024-07-15 17:47:22.598942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.692 [2024-07-15 17:47:22.599184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.692 [2024-07-15 17:47:22.599210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.692 [2024-07-15 17:47:22.599227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.692 [2024-07-15 17:47:22.602789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.692 [2024-07-15 17:47:22.612046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.692 [2024-07-15 17:47:22.612493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.692 [2024-07-15 17:47:22.612524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.692 [2024-07-15 17:47:22.612542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.692 [2024-07-15 17:47:22.612780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.692 [2024-07-15 17:47:22.613036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.692 [2024-07-15 17:47:22.613062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.613079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.616638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.693 [2024-07-15 17:47:22.625903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.626369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.626401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.626419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.626657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.626918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.626944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.626959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.630518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.693 [2024-07-15 17:47:22.639773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.640226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.640257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.640275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.640514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.640757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.640781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.640797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.644367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.693 [2024-07-15 17:47:22.653617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.654060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.654091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.654109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.654348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.654591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.654615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.654631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.658197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.693 [2024-07-15 17:47:22.667448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.667893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.667924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.667942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.668180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.668423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.668447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.668463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.672039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.693 [2024-07-15 17:47:22.681288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.681780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.681811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.681828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.682079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.682323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.682347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.682363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.685948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.693 [2024-07-15 17:47:22.695207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.695649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.695682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.695702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.695952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.696195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.696220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.696236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.699799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.693 [2024-07-15 17:47:22.709106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.709557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.709588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.709606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.709845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.710097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.710122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.710138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.713697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.693 [2024-07-15 17:47:22.722941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.723440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.723471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.723495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.723734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.723991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.724017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.724032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.727593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.693 [2024-07-15 17:47:22.736829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.737293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.737324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.737342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.737580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.737823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.737847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.737864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.741426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.693 [2024-07-15 17:47:22.750677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.751149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.751180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.751198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.751435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.751677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.751702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.693 [2024-07-15 17:47:22.751718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.693 [2024-07-15 17:47:22.755291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.693 [2024-07-15 17:47:22.764568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.693 [2024-07-15 17:47:22.764997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.693 [2024-07-15 17:47:22.765030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.693 [2024-07-15 17:47:22.765048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.693 [2024-07-15 17:47:22.765286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.693 [2024-07-15 17:47:22.765528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.693 [2024-07-15 17:47:22.765558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.694 [2024-07-15 17:47:22.765575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.694 [2024-07-15 17:47:22.769168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.694 [2024-07-15 17:47:22.778425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.694 [2024-07-15 17:47:22.778888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.694 [2024-07-15 17:47:22.778929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.694 [2024-07-15 17:47:22.778947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.694 [2024-07-15 17:47:22.779186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.694 [2024-07-15 17:47:22.779429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.694 [2024-07-15 17:47:22.779454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.694 [2024-07-15 17:47:22.779470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.694 [2024-07-15 17:47:22.783048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.694 [2024-07-15 17:47:22.792301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.694 [2024-07-15 17:47:22.792755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.694 [2024-07-15 17:47:22.792786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.694 [2024-07-15 17:47:22.792804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.694 [2024-07-15 17:47:22.793055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.694 [2024-07-15 17:47:22.793297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.694 [2024-07-15 17:47:22.793323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.694 [2024-07-15 17:47:22.793339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.694 [2024-07-15 17:47:22.796906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.694 [2024-07-15 17:47:22.806160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.694 [2024-07-15 17:47:22.806598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.694 [2024-07-15 17:47:22.806629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.694 [2024-07-15 17:47:22.806647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.694 [2024-07-15 17:47:22.806896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.694 [2024-07-15 17:47:22.807138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.694 [2024-07-15 17:47:22.807163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.694 [2024-07-15 17:47:22.807180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.694 [2024-07-15 17:47:22.810741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.694 [2024-07-15 17:47:22.820016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.694 [2024-07-15 17:47:22.820436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.694 [2024-07-15 17:47:22.820468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.694 [2024-07-15 17:47:22.820488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.694 [2024-07-15 17:47:22.820726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.694 [2024-07-15 17:47:22.820979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.694 [2024-07-15 17:47:22.821005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.694 [2024-07-15 17:47:22.821021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.694 [2024-07-15 17:47:22.824583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.956 [2024-07-15 17:47:22.833856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.834311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.834343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.834361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.834600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.834841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.834867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.834895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.838458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.956 [2024-07-15 17:47:22.847706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.848164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.848196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.848214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.848452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.848694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.848719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.848735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.852316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.956 [2024-07-15 17:47:22.861563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.862011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.862051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.862076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.862315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.862556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.862582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.862598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.866172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.956 [2024-07-15 17:47:22.875428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.875869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.875910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.875928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.876166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.876408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.876434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.876449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.880022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.956 [2024-07-15 17:47:22.889266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.889706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.889738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.889756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.890011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.890256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.890281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.890297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.893859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.956 [2024-07-15 17:47:22.903110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.903563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.903595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.903612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.903850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.904105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.904137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.904154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.907713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.956 [2024-07-15 17:47:22.916966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.917410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.917441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.917459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.917697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.917953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.917979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.917995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.921554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.956 [2024-07-15 17:47:22.930802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.931254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.931286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.931304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.931542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.931784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.931809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.931825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.935398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.956 [2024-07-15 17:47:22.944640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.945092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.945124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.945141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.945380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.945621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.945646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.945663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.956 [2024-07-15 17:47:22.949233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.956 [2024-07-15 17:47:22.958478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.956 [2024-07-15 17:47:22.958928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.956 [2024-07-15 17:47:22.958960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.956 [2024-07-15 17:47:22.958978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.956 [2024-07-15 17:47:22.959216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.956 [2024-07-15 17:47:22.959458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.956 [2024-07-15 17:47:22.959483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.956 [2024-07-15 17:47:22.959500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:22.963069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.957 [2024-07-15 17:47:22.972314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:22.972763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:22.972794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:22.972813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:22.973064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:22.973307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:22.973332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:22.973348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:22.976919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.957 [2024-07-15 17:47:22.986163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:22.986594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:22.986625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:22.986643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:22.986892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:22.987135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:22.987160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:22.987177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:22.990739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.957 [2024-07-15 17:47:22.999992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:23.000409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:23.000441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:23.000458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:23.000706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:23.000962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:23.000996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:23.001012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:23.004574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.957 [2024-07-15 17:47:23.013829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:23.014300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:23.014332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:23.014350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:23.014587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:23.014829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:23.014855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:23.014871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:23.018444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.957 [2024-07-15 17:47:23.027696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:23.028160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:23.028192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:23.028210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:23.028449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:23.028690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:23.028716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:23.028733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:23.032303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.957 [2024-07-15 17:47:23.041557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:23.041999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:23.042032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:23.042050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:23.042288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:23.042531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:23.042557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:23.042579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:23.046153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.957 [2024-07-15 17:47:23.055403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:23.055856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:23.055896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:23.055916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:23.056156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:23.056397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:23.056422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:23.056437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:23.060006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.957 [2024-07-15 17:47:23.069252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:23.069699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:23.069730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:23.069748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:23.070000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:23.070243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:23.070269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:23.070285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:23.073843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.957 [2024-07-15 17:47:23.083097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.957 [2024-07-15 17:47:23.083538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.957 [2024-07-15 17:47:23.083570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:27.957 [2024-07-15 17:47:23.083588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:27.957 [2024-07-15 17:47:23.083825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:27.957 [2024-07-15 17:47:23.084080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.957 [2024-07-15 17:47:23.084107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.957 [2024-07-15 17:47:23.084123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.957 [2024-07-15 17:47:23.087686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.232 [2024-07-15 17:47:23.097044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.097492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.097529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.097548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.097787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.098043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.098070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.098086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.101647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.110900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.111339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.111371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.111389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.111627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.111868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.111905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.111922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.115481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.124733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.125168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.125201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.125219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.125458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.125701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.125727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.125744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.129324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.138803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.139256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.139289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.139308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.139547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.139797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.139822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.139838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.143407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.152663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.153116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.153148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.153166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.153405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.153647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.153672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.153688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.157258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.166516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.166975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.167006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.167024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.167262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.167505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.167529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.167544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.171115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.180370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.180810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.180842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.180860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.181108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.181359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.181384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.181401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.184996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.194258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.194676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.194707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.194725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.194975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.195220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.195244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.195260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.198822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.208093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.208699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.208755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.208773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.209025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.209269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.209294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.209310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.212873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.221941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.222359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.222390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.222408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.222646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.222901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.222926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.222943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.226502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.235757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.236211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.236242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.236267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.236506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.236749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.236774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.236790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.240355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.249605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.250038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.250069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.250087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.250325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.250568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.250593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.250608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.254180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.263430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.263869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.263908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.263927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.264165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.264407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.264432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.264448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.268013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.277266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.277706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.277738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.277756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.278006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.278250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.278281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.278298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.281860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.291137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.291644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.291675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.291692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.291943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.292184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.292209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.292225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.295786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.305052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.305491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.305522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.305540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.305778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.306031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.306057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.306072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.309634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.233 [2024-07-15 17:47:23.318895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.319502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.319560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.319578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.233 [2024-07-15 17:47:23.319816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.233 [2024-07-15 17:47:23.320069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.233 [2024-07-15 17:47:23.320095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.233 [2024-07-15 17:47:23.320111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.233 [2024-07-15 17:47:23.323670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.233 [2024-07-15 17:47:23.332724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.233 [2024-07-15 17:47:23.333178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.233 [2024-07-15 17:47:23.333210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.233 [2024-07-15 17:47:23.333227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.234 [2024-07-15 17:47:23.333466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.234 [2024-07-15 17:47:23.333708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.234 [2024-07-15 17:47:23.333733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.234 [2024-07-15 17:47:23.333750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.234 [2024-07-15 17:47:23.337319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.234 [2024-07-15 17:47:23.346601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.234 [2024-07-15 17:47:23.347047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.234 [2024-07-15 17:47:23.347079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.234 [2024-07-15 17:47:23.347096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.234 [2024-07-15 17:47:23.347335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.234 [2024-07-15 17:47:23.347576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.234 [2024-07-15 17:47:23.347601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.234 [2024-07-15 17:47:23.347617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.234 [2024-07-15 17:47:23.351192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.234 [2024-07-15 17:47:23.360446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.234 [2024-07-15 17:47:23.360889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.234 [2024-07-15 17:47:23.360921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.234 [2024-07-15 17:47:23.360939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.234 [2024-07-15 17:47:23.361178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.234 [2024-07-15 17:47:23.361420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.234 [2024-07-15 17:47:23.361446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.234 [2024-07-15 17:47:23.361462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.234 [2024-07-15 17:47:23.365033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.498 [2024-07-15 17:47:23.374294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.374754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.374786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.374804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.375063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.375306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.375331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.375347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.378912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.498 [2024-07-15 17:47:23.388156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.388606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.388637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.388655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.388906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.389148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.389173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.389190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.392747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.498 [2024-07-15 17:47:23.402012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.402451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.402483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.402500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.402737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.402994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.403020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.403037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.406594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.498 [2024-07-15 17:47:23.415840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.416286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.416318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.416336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.416575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.416816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.416842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.416865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.420442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.498 [2024-07-15 17:47:23.429713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.430128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.430160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.430177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.430416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.430658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.430683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.430698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.434268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.498 [2024-07-15 17:47:23.443727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.444153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.444185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.444203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.444441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.444682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.444708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.444723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.448295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.498 [2024-07-15 17:47:23.457749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.458208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.458239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.458257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.458494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.458736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.458760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.458775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.462344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.498 [2024-07-15 17:47:23.471600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.472020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.472051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.472069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.472306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.472547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.472571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.472586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.476165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.498 [2024-07-15 17:47:23.485063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.485472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.485497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.485512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.485761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.485993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.486022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.486036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.489080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.498 [2024-07-15 17:47:23.498390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.498766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.498792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.498808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.498 [2024-07-15 17:47:23.499062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.498 [2024-07-15 17:47:23.499286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.498 [2024-07-15 17:47:23.499305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.498 [2024-07-15 17:47:23.499318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.498 [2024-07-15 17:47:23.502287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.498 [2024-07-15 17:47:23.511786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.498 [2024-07-15 17:47:23.512226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.498 [2024-07-15 17:47:23.512254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.498 [2024-07-15 17:47:23.512270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.512518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.512723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.512743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.512755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.515774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.499 [2024-07-15 17:47:23.525106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.525495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.525521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.525536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.525784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.526011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.526031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.526044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.529021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.499 [2024-07-15 17:47:23.538460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.538886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.538915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.538931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.539173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.539371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.539390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.539402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.542381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.499 [2024-07-15 17:47:23.551685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.552115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.552143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.552158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.552413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.552611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.552630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.552647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.555631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.499 [2024-07-15 17:47:23.564880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.565263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.565290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.565304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.565542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.565756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.565776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.565788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.568745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.499 [2024-07-15 17:47:23.578161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.578596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.578638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.578652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.578930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.579141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.579176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.579189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.582078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.499 [2024-07-15 17:47:23.591372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.591806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.591846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.591862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.592099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.592334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.592353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.592365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.595334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.499 [2024-07-15 17:47:23.604655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.605081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.605113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.605130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.605384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.605582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.605601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.605613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.608608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.499 [2024-07-15 17:47:23.617898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.618334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.618376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.618392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.499 [2024-07-15 17:47:23.618645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.499 [2024-07-15 17:47:23.618844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.499 [2024-07-15 17:47:23.618863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.499 [2024-07-15 17:47:23.618882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.499 [2024-07-15 17:47:23.621856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.499 [2024-07-15 17:47:23.631343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.499 [2024-07-15 17:47:23.631808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.499 [2024-07-15 17:47:23.631835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.499 [2024-07-15 17:47:23.631849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.759 [2024-07-15 17:47:23.632117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.759 [2024-07-15 17:47:23.632352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.759 [2024-07-15 17:47:23.632372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.759 [2024-07-15 17:47:23.632385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.759 [2024-07-15 17:47:23.635451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.759 [2024-07-15 17:47:23.644615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.759 [2024-07-15 17:47:23.645042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.759 [2024-07-15 17:47:23.645069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.759 [2024-07-15 17:47:23.645084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.759 [2024-07-15 17:47:23.645317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.759 [2024-07-15 17:47:23.645520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.759 [2024-07-15 17:47:23.645539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.759 [2024-07-15 17:47:23.645551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.759 [2024-07-15 17:47:23.648516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.759 [2024-07-15 17:47:23.657786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.759 [2024-07-15 17:47:23.658249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.759 [2024-07-15 17:47:23.658289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.759 [2024-07-15 17:47:23.658306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.759 [2024-07-15 17:47:23.658538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.759 [2024-07-15 17:47:23.658736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.759 [2024-07-15 17:47:23.658755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.759 [2024-07-15 17:47:23.658767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.759 [2024-07-15 17:47:23.661738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.759 [2024-07-15 17:47:23.671143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.759 [2024-07-15 17:47:23.671565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.759 [2024-07-15 17:47:23.671592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.759 [2024-07-15 17:47:23.671622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.759 [2024-07-15 17:47:23.671874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.759 [2024-07-15 17:47:23.672106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.672126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.672138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.675066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.760 [2024-07-15 17:47:23.684342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.684779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.684821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.684836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.685085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.685301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.685320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.685333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.688317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.760 [2024-07-15 17:47:23.697585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.698010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.698038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.698054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.698309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.698507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.698526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.698539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.701503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.760 [2024-07-15 17:47:23.710801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.711186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.711228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.711243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.711494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.711691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.711710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.711722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.714704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.760 [2024-07-15 17:47:23.724130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.724542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.724569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.724599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.724847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.725085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.725108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.725121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.728117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.760 [2024-07-15 17:47:23.737435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.737797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.737837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.737856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.738117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.738339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.738358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.738370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.741339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.760 [2024-07-15 17:47:23.750624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.751048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.751076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.751092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.751346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.751544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.751563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.751575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.754593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.760 [2024-07-15 17:47:23.764143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.764557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.764584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.764599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.764842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.765076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.765097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.765110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.768173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.760 [2024-07-15 17:47:23.777440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.777880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.777931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.777947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.778200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.778416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.778441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.778454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.781503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.760 [2024-07-15 17:47:23.790817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.791251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.791278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.791308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.791562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.791761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.791780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.791793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.794803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.760 [2024-07-15 17:47:23.804146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.804579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.804619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.804635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.804867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.805073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.760 [2024-07-15 17:47:23.805093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.760 [2024-07-15 17:47:23.805106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.760 [2024-07-15 17:47:23.808077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.760 [2024-07-15 17:47:23.817429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.760 [2024-07-15 17:47:23.817866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.760 [2024-07-15 17:47:23.817915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.760 [2024-07-15 17:47:23.817931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.760 [2024-07-15 17:47:23.818183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.760 [2024-07-15 17:47:23.818381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.761 [2024-07-15 17:47:23.818400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.761 [2024-07-15 17:47:23.818412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.761 [2024-07-15 17:47:23.821459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.761 [2024-07-15 17:47:23.830795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.761 [2024-07-15 17:47:23.831264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.761 [2024-07-15 17:47:23.831305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.761 [2024-07-15 17:47:23.831322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.761 [2024-07-15 17:47:23.831574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.761 [2024-07-15 17:47:23.831772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.761 [2024-07-15 17:47:23.831790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.761 [2024-07-15 17:47:23.831803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.761 [2024-07-15 17:47:23.834807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.761 [2024-07-15 17:47:23.844174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.761 [2024-07-15 17:47:23.844568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.761 [2024-07-15 17:47:23.844595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.761 [2024-07-15 17:47:23.844626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.761 [2024-07-15 17:47:23.844873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.761 [2024-07-15 17:47:23.845098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.761 [2024-07-15 17:47:23.845119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.761 [2024-07-15 17:47:23.845131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.761 [2024-07-15 17:47:23.848103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.761 [2024-07-15 17:47:23.857467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.761 [2024-07-15 17:47:23.857871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.761 [2024-07-15 17:47:23.857903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.761 [2024-07-15 17:47:23.857918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.761 [2024-07-15 17:47:23.858174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.761 [2024-07-15 17:47:23.858372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.761 [2024-07-15 17:47:23.858391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.761 [2024-07-15 17:47:23.858403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.761 [2024-07-15 17:47:23.861456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.761 [2024-07-15 17:47:23.870764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.761 [2024-07-15 17:47:23.871165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.761 [2024-07-15 17:47:23.871206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.761 [2024-07-15 17:47:23.871221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.761 [2024-07-15 17:47:23.871426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.761 [2024-07-15 17:47:23.871640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.761 [2024-07-15 17:47:23.871659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.761 [2024-07-15 17:47:23.871671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.761 [2024-07-15 17:47:23.874643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:28.761 [2024-07-15 17:47:23.884108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.761 [2024-07-15 17:47:23.884526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.761 [2024-07-15 17:47:23.884552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:28.761 [2024-07-15 17:47:23.884581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:28.761 [2024-07-15 17:47:23.884813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:28.761 [2024-07-15 17:47:23.885073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.761 [2024-07-15 17:47:23.885095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:28.761 [2024-07-15 17:47:23.885110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.761 [2024-07-15 17:47:23.888122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.021 [2024-07-15 17:47:23.897551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.021 [2024-07-15 17:47:23.897974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-15 17:47:23.898002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.021 [2024-07-15 17:47:23.898018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.021 [2024-07-15 17:47:23.898232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.021 [2024-07-15 17:47:23.898470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.021 [2024-07-15 17:47:23.898490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.021 [2024-07-15 17:47:23.898502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.021 [2024-07-15 17:47:23.901691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.021 [2024-07-15 17:47:23.910802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.021 [2024-07-15 17:47:23.911234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-15 17:47:23.911260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.021 [2024-07-15 17:47:23.911275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.021 [2024-07-15 17:47:23.911524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.021 [2024-07-15 17:47:23.911723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.021 [2024-07-15 17:47:23.911742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.021 [2024-07-15 17:47:23.911759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.021 [2024-07-15 17:47:23.914721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.021 [2024-07-15 17:47:23.924081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.021 [2024-07-15 17:47:23.924532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-15 17:47:23.924573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.021 [2024-07-15 17:47:23.924590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.021 [2024-07-15 17:47:23.924843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:23.925067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:23.925088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:23.925101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:23.928142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.022 [2024-07-15 17:47:23.937433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:23.937871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:23.937919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:23.937934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:23.938166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:23.938364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:23.938383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:23.938395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:23.941393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.022 [2024-07-15 17:47:23.950699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:23.951129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:23.951157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:23.951173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:23.951427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:23.951624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:23.951643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:23.951655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:23.954695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.022 [2024-07-15 17:47:23.963940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:23.964368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:23.964408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:23.964424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:23.964658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:23.964856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:23.964874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:23.964912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:23.967869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.022 [2024-07-15 17:47:23.977120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:23.977539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:23.977580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:23.977596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:23.977847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:23.978094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:23.978115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:23.978128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:23.981104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.022 [2024-07-15 17:47:23.990390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:23.990829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:23.990870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:23.990895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:23.991125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:23.991362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:23.991381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:23.991394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:23.994362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.022 [2024-07-15 17:47:24.003627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:24.004050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:24.004078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:24.004094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:24.004338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:24.004551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:24.004570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:24.004582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:24.007474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.022 [2024-07-15 17:47:24.017272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:24.017718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:24.017758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:24.017775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:24.018041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:24.018267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:24.018286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:24.018299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:24.021353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.022 [2024-07-15 17:47:24.030599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:24.031030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:24.031058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:24.031073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:24.031328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:24.031526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:24.031545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:24.031558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:24.034533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.022 [2024-07-15 17:47:24.043806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:24.044214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:24.044241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:24.044255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:24.044471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:24.044668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:24.044688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:24.044700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:24.047678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.022 [2024-07-15 17:47:24.057185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:24.057640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:24.057680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.022 [2024-07-15 17:47:24.057696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.022 [2024-07-15 17:47:24.057926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.022 [2024-07-15 17:47:24.058130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.022 [2024-07-15 17:47:24.058150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.022 [2024-07-15 17:47:24.058162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.022 [2024-07-15 17:47:24.061132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.022 [2024-07-15 17:47:24.070377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.022 [2024-07-15 17:47:24.070810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.022 [2024-07-15 17:47:24.070852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.023 [2024-07-15 17:47:24.070869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.023 [2024-07-15 17:47:24.071107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.023 [2024-07-15 17:47:24.071341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.023 [2024-07-15 17:47:24.071361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.023 [2024-07-15 17:47:24.071373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.023 [2024-07-15 17:47:24.074342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2333822 Killed "${NVMF_APP[@]}" "$@" 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.023 [2024-07-15 17:47:24.083794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.023 [2024-07-15 17:47:24.084201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.023 [2024-07-15 17:47:24.084229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.023 [2024-07-15 17:47:24.084244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.023 [2024-07-15 17:47:24.084462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.023 [2024-07-15 17:47:24.084660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.023 [2024-07-15 17:47:24.084679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.023 [2024-07-15 17:47:24.084696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2334779 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2334779 00:24:29.023 [2024-07-15 17:47:24.087780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2334779 ']' 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.023 17:47:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.023 [2024-07-15 17:47:24.097269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.023 [2024-07-15 17:47:24.097709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.023 [2024-07-15 17:47:24.097751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.023 [2024-07-15 17:47:24.097768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.023 [2024-07-15 17:47:24.098020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.023 [2024-07-15 17:47:24.098265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.023 [2024-07-15 17:47:24.098285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.023 [2024-07-15 17:47:24.098297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.023 [2024-07-15 17:47:24.101381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
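[editor's note] The repeated "connect() failed, errno = 111" entries above and below coincide with the window in which bdevperf.sh has killed the previous "${NVMF_APP[@]}" process (line 35 of bdevperf.sh) and the replacement nvmf_tgt has not yet started listening on 10.0.0.2:4420, so every reconnect attempt from the host is refused. The following is a minimal stand-alone sketch, not SPDK code, that reproduces the same ECONNREFUSED (errno 111) condition reported by posix_sock_create(); the address and port mirror the log, everything else is a hypothetical illustration.

/*
 * Minimal sketch (not SPDK code): a TCP connect() to an address/port with no
 * listener fails with errno = 111 (ECONNREFUSED) on Linux, which is the
 * condition posix_sock_create() logs above while the target is down.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on 10.0.0.2:4420 this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}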
00:24:29.023 [2024-07-15 17:47:24.110702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.023 [2024-07-15 17:47:24.111151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.023 [2024-07-15 17:47:24.111178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.023 [2024-07-15 17:47:24.111194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.023 [2024-07-15 17:47:24.111435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.023 [2024-07-15 17:47:24.111649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.023 [2024-07-15 17:47:24.111668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.023 [2024-07-15 17:47:24.111680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.023 [2024-07-15 17:47:24.114747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.023 [2024-07-15 17:47:24.124136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.023 [2024-07-15 17:47:24.124576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.023 [2024-07-15 17:47:24.124602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.023 [2024-07-15 17:47:24.124637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.023 [2024-07-15 17:47:24.124871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.023 [2024-07-15 17:47:24.125076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.023 [2024-07-15 17:47:24.125095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.023 [2024-07-15 17:47:24.125107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.023 [2024-07-15 17:47:24.128103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.023 [2024-07-15 17:47:24.135484] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:24:29.023 [2024-07-15 17:47:24.135556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.023 [2024-07-15 17:47:24.137520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.023 [2024-07-15 17:47:24.137963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.023 [2024-07-15 17:47:24.137992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.023 [2024-07-15 17:47:24.138008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.023 [2024-07-15 17:47:24.138250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.023 [2024-07-15 17:47:24.138463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.023 [2024-07-15 17:47:24.138483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.023 [2024-07-15 17:47:24.138496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.023 [2024-07-15 17:47:24.141521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.023 [2024-07-15 17:47:24.150934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.023 [2024-07-15 17:47:24.151336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.023 [2024-07-15 17:47:24.151364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.023 [2024-07-15 17:47:24.151380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.023 [2024-07-15 17:47:24.151609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.023 [2024-07-15 17:47:24.151823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.023 [2024-07-15 17:47:24.151843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.024 [2024-07-15 17:47:24.151855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.024 [2024-07-15 17:47:24.155405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.284 [2024-07-15 17:47:24.164421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.164869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.164918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.164939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.165191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.165390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.165409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.165421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.168423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.284 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.284 [2024-07-15 17:47:24.177836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.178286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.178314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.178329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.178572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.178777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.178796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.178809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.181905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.284 [2024-07-15 17:47:24.191190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.191577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.191604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.191619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.191861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.192095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.192116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.192129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.195226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.284 [2024-07-15 17:47:24.202743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.284 [2024-07-15 17:47:24.204561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.204999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.205028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.205043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.205286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.205496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.205516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.205529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.208604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.284 [2024-07-15 17:47:24.218043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.218618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.218671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.218692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.218965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.219181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.219202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.219219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.222298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.284 [2024-07-15 17:47:24.231372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.231835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.231884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.231903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.232118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.232357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.232377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.232391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.235479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.284 [2024-07-15 17:47:24.244718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.245168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.245197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.245214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.245457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.245662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.245682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.245695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.248756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.284 [2024-07-15 17:47:24.258081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.258509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.258554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.258571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.258849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.284 [2024-07-15 17:47:24.259092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.284 [2024-07-15 17:47:24.259115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.284 [2024-07-15 17:47:24.259129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.284 [2024-07-15 17:47:24.262614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.284 [2024-07-15 17:47:24.271525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.284 [2024-07-15 17:47:24.272065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.284 [2024-07-15 17:47:24.272103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.284 [2024-07-15 17:47:24.272124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.284 [2024-07-15 17:47:24.272375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.272584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.272605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.272621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.275701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.285 [2024-07-15 17:47:24.284923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.285301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.285328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.285343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.285567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.285772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.285792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.285805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.288928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.285 [2024-07-15 17:47:24.298325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.298766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.298793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.298819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.299043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.299273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.299293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.299307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.302415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.285 [2024-07-15 17:47:24.311751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.312236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.312264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.312280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.312504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.312722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.312743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.312757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.314000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.285 [2024-07-15 17:47:24.314034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.285 [2024-07-15 17:47:24.314048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.285 [2024-07-15 17:47:24.314059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.285 [2024-07-15 17:47:24.314069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:29.285 [2024-07-15 17:47:24.314250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.285 [2024-07-15 17:47:24.314306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.285 [2024-07-15 17:47:24.314310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.285 [2024-07-15 17:47:24.316003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.285 [2024-07-15 17:47:24.325356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.325957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.325997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.326017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.326252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.326475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.326497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.326514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.329801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.285 [2024-07-15 17:47:24.339083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.339675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.339712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.339731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.339968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.340193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.340215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.340244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.343524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.285 [2024-07-15 17:47:24.352769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.353369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.353409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.353428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.353653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.353884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.353907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.353934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.357269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.285 [2024-07-15 17:47:24.366375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.367010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.367049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.367069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.367310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.367525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.367546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.367562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.370691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.285 [2024-07-15 17:47:24.379938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.380460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.380496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.380525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.380763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.380994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.381016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.381032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.384245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.285 [2024-07-15 17:47:24.393449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.394067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.394109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.394130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.394369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.394585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.394607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.285 [2024-07-15 17:47:24.394624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.285 [2024-07-15 17:47:24.397790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.285 [2024-07-15 17:47:24.407074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.285 [2024-07-15 17:47:24.407509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.285 [2024-07-15 17:47:24.407537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.285 [2024-07-15 17:47:24.407553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.285 [2024-07-15 17:47:24.407767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.285 [2024-07-15 17:47:24.408025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.285 [2024-07-15 17:47:24.408046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.286 [2024-07-15 17:47:24.408061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.286 [2024-07-15 17:47:24.411270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.547 [2024-07-15 17:47:24.420624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.421011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.547 [2024-07-15 17:47:24.421039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.547 [2024-07-15 17:47:24.421055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.547 [2024-07-15 17:47:24.421284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.547 [2024-07-15 17:47:24.421495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.547 [2024-07-15 17:47:24.421525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.547 [2024-07-15 17:47:24.421539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.547 [2024-07-15 17:47:24.424820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.547 [2024-07-15 17:47:24.434211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.434611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.547 [2024-07-15 17:47:24.434639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.547 [2024-07-15 17:47:24.434654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.547 [2024-07-15 17:47:24.434891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.547 [2024-07-15 17:47:24.435103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.547 [2024-07-15 17:47:24.435123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.547 [2024-07-15 17:47:24.435136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.547 [2024-07-15 17:47:24.438329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.547 [2024-07-15 17:47:24.447710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.448101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.547 [2024-07-15 17:47:24.448129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.547 [2024-07-15 17:47:24.448145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.547 [2024-07-15 17:47:24.448359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.547 [2024-07-15 17:47:24.448586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.547 [2024-07-15 17:47:24.448607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.547 [2024-07-15 17:47:24.448620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.547 [2024-07-15 17:47:24.451648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.547 [2024-07-15 17:47:24.461252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.461678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.547 [2024-07-15 17:47:24.461705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.547 [2024-07-15 17:47:24.461721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.547 [2024-07-15 17:47:24.461958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.547 [2024-07-15 17:47:24.462170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.547 [2024-07-15 17:47:24.462191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.547 [2024-07-15 17:47:24.462204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.547 [2024-07-15 17:47:24.465393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.547 [2024-07-15 17:47:24.474732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.475124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.547 [2024-07-15 17:47:24.475152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.547 [2024-07-15 17:47:24.475168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.547 [2024-07-15 17:47:24.475382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.547 [2024-07-15 17:47:24.475607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.547 [2024-07-15 17:47:24.475628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.547 [2024-07-15 17:47:24.475641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.547 [2024-07-15 17:47:24.478793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.547 [2024-07-15 17:47:24.488278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.488650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.547 [2024-07-15 17:47:24.488678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.547 [2024-07-15 17:47:24.488694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.547 [2024-07-15 17:47:24.488932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.547 [2024-07-15 17:47:24.489144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.547 [2024-07-15 17:47:24.489164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.547 [2024-07-15 17:47:24.489177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.547 [2024-07-15 17:47:24.492367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.547 [2024-07-15 17:47:24.502030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.502437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.547 [2024-07-15 17:47:24.502465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.547 [2024-07-15 17:47:24.502480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.547 [2024-07-15 17:47:24.502694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.547 [2024-07-15 17:47:24.502929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.547 [2024-07-15 17:47:24.502950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.547 [2024-07-15 17:47:24.502963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.547 [2024-07-15 17:47:24.506175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.547 [2024-07-15 17:47:24.515526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.547 [2024-07-15 17:47:24.515945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.515974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.515989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.516208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.516437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.516458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.516471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.519760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.548 [2024-07-15 17:47:24.529168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.529578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.529605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.529620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.529834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.530090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.530112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.530126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.533332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.548 [2024-07-15 17:47:24.542676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.543106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.543134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.543149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.543362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.543590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.543610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.543623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.546924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.548 [2024-07-15 17:47:24.556288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.556717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.556745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.556760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.556984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.557217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.557238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.557256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.560406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.548 [2024-07-15 17:47:24.569808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.570229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.570257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.570273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.570487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.570714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.570734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.570747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.573924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.548 [2024-07-15 17:47:24.583423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.583852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.583886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.583903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.584117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.584346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.584367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.584380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.587567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.548 [2024-07-15 17:47:24.596888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.597261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.597288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.597304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.597532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.597742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.597763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.597776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.600805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.548 [2024-07-15 17:47:24.610354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.610767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.610798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.610815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.611036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.611268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.611288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.611301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.614451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.548 [2024-07-15 17:47:24.623819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.624229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.624257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.624273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.624501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.624711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.624731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.624744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.627921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.548 [2024-07-15 17:47:24.637418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.637812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.637840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.637855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.638076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.638306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.638327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.638340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.641530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.548 [2024-07-15 17:47:24.650907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.548 [2024-07-15 17:47:24.651385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.548 [2024-07-15 17:47:24.651413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.548 [2024-07-15 17:47:24.651428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.548 [2024-07-15 17:47:24.651656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.548 [2024-07-15 17:47:24.651871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.548 [2024-07-15 17:47:24.651900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.548 [2024-07-15 17:47:24.651914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.548 [2024-07-15 17:47:24.655063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.549 [2024-07-15 17:47:24.664411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.549 [2024-07-15 17:47:24.664832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.549 [2024-07-15 17:47:24.664859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.549 [2024-07-15 17:47:24.664882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.549 [2024-07-15 17:47:24.665112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.549 [2024-07-15 17:47:24.665323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.549 [2024-07-15 17:47:24.665343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.549 [2024-07-15 17:47:24.665356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.549 [2024-07-15 17:47:24.668546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.549 [2024-07-15 17:47:24.678006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.549 [2024-07-15 17:47:24.678394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.549 [2024-07-15 17:47:24.678422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.549 [2024-07-15 17:47:24.678438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.549 [2024-07-15 17:47:24.678666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.549 [2024-07-15 17:47:24.678902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.549 [2024-07-15 17:47:24.678924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.549 [2024-07-15 17:47:24.678937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.809 [2024-07-15 17:47:24.682283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.809 [2024-07-15 17:47:24.691451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.809 [2024-07-15 17:47:24.691846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.809 [2024-07-15 17:47:24.691873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:29.809 [2024-07-15 17:47:24.691898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:29.809 [2024-07-15 17:47:24.692112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:29.809 [2024-07-15 17:47:24.692340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.809 [2024-07-15 17:47:24.692361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.809 [2024-07-15 17:47:24.692374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.809 [2024-07-15 17:47:24.695567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.809 - 00:24:30.072 [28 further identical reconnect attempts to 10.0.0.2:4420 between 17:47:24.704 and 17:47:25.069, each failing with connect() errno = 111 and ending in 'Resetting controller failed.']
00:24:30.072 [2024-07-15 17:47:25.082482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.072 [2024-07-15 17:47:25.082888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.072 [2024-07-15 17:47:25.082917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.072 [2024-07-15 17:47:25.082933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.072 [2024-07-15 17:47:25.083147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.072 [2024-07-15 17:47:25.083364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.072 [2024-07-15 17:47:25.083385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.072 [2024-07-15 17:47:25.083399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.072 [2024-07-15 17:47:25.086692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.072 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.072 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:30.072 17:47:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.072 [2024-07-15 17:47:25.096072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.072 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:30.072 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:30.072 [2024-07-15 17:47:25.096451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.072 [2024-07-15 17:47:25.096481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.072 [2024-07-15 17:47:25.096496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.072 [2024-07-15 17:47:25.096711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.072 [2024-07-15 17:47:25.096938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.072 [2024-07-15 17:47:25.096960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.072 [2024-07-15 17:47:25.096979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.072 [2024-07-15 17:47:25.100273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.072 [2024-07-15 17:47:25.109650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.072 [2024-07-15 17:47:25.110086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.072 [2024-07-15 17:47:25.110114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.072 [2024-07-15 17:47:25.110130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.072 [2024-07-15 17:47:25.110345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.072 [2024-07-15 17:47:25.110571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.072 [2024-07-15 17:47:25.110591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.072 [2024-07-15 17:47:25.110604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.072 [2024-07-15 17:47:25.113777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.072 17:47:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.072 17:47:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:30.073 [2024-07-15 17:47:25.123339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.073 [2024-07-15 17:47:25.123760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.073 [2024-07-15 17:47:25.123788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.073 [2024-07-15 17:47:25.123804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.073 [2024-07-15 17:47:25.124027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.073 [2024-07-15 17:47:25.124257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.073 [2024-07-15 17:47:25.124278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.073 [2024-07-15 17:47:25.124291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.073 [2024-07-15 17:47:25.125197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.073 [2024-07-15 17:47:25.127578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:30.073 [2024-07-15 17:47:25.137004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.073 [2024-07-15 17:47:25.137466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.073 [2024-07-15 17:47:25.137494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.073 [2024-07-15 17:47:25.137510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.073 [2024-07-15 17:47:25.137751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.073 [2024-07-15 17:47:25.137994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.073 [2024-07-15 17:47:25.138016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.073 [2024-07-15 17:47:25.138031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.073 [2024-07-15 17:47:25.141197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.073 [2024-07-15 17:47:25.150538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.073 [2024-07-15 17:47:25.150940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.073 [2024-07-15 17:47:25.150968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.073 [2024-07-15 17:47:25.150983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.073 [2024-07-15 17:47:25.151211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.073 [2024-07-15 17:47:25.151421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.073 [2024-07-15 17:47:25.151442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.073 [2024-07-15 17:47:25.151455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.073 [2024-07-15 17:47:25.154667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.073 [2024-07-15 17:47:25.164098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.073 [2024-07-15 17:47:25.164750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.073 [2024-07-15 17:47:25.164794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.073 [2024-07-15 17:47:25.164814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.073 [2024-07-15 17:47:25.165047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.073 [2024-07-15 17:47:25.165284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.073 [2024-07-15 17:47:25.165305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.073 [2024-07-15 17:47:25.165323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.073 [2024-07-15 17:47:25.168480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.073 Malloc0 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:30.073 [2024-07-15 17:47:25.177805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.073 [2024-07-15 17:47:25.178257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.073 [2024-07-15 17:47:25.178286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.073 [2024-07-15 17:47:25.178302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.073 [2024-07-15 17:47:25.178532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.073 [2024-07-15 17:47:25.178752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.073 [2024-07-15 17:47:25.178773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.073 [2024-07-15 17:47:25.178787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:30.073 [2024-07-15 17:47:25.182028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:30.073 [2024-07-15 17:47:25.191372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.073 [2024-07-15 17:47:25.191788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.073 [2024-07-15 17:47:25.191816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb0ac0 with addr=10.0.0.2, port=4420 00:24:30.073 [2024-07-15 17:47:25.191831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0ac0 is same with the state(5) to be set 00:24:30.073 [2024-07-15 17:47:25.192054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb0ac0 (9): Bad file descriptor 00:24:30.073 [2024-07-15 17:47:25.192285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.073 [2024-07-15 17:47:25.192286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.073 [2024-07-15 17:47:25.192306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.073 [2024-07-15 17:47:25.192321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.073 [2024-07-15 17:47:25.195504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.073 17:47:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2334110 00:24:30.073 [2024-07-15 17:47:25.204911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.332 [2024-07-15 17:47:25.239692] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
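The rpc_cmd calls traced in this block map one-to-one onto plain rpc.py invocations against the target's RPC socket. A minimal sketch of the same target-side setup, assuming the in-tree scripts/rpc.py wrapper and the default /var/tmp/spdk.sock RPC socket (adjust paths for other setups):

    # TCP transport with the flags used by the test (-u 8192 sets the I/O unit size in bytes)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MB RAM-backed bdev with 512-byte blocks, exported through the test subsystem
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added, the bdevperf process being waited on (PID 2334110) stops getting ECONNREFUSED and its next controller reset completes, which is the "Resetting controller successful." notice that closes this block.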
00:24:38.446 00:24:38.446 Latency(us) 00:24:38.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.446 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:38.446 Verification LBA range: start 0x0 length 0x4000 00:24:38.446 Nvme1n1 : 15.01 6402.76 25.01 10762.67 0.00 7433.36 837.40 22233.69 00:24:38.446 =================================================================================================================== 00:24:38.446 Total : 6402.76 25.01 10762.67 0.00 7433.36 837.40 22233.69 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.704 rmmod nvme_tcp 00:24:38.704 rmmod nvme_fabrics 00:24:38.704 rmmod nvme_keyring 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2334779 ']' 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2334779 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2334779 ']' 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2334779 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.704 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2334779 00:24:38.961 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:38.961 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:38.961 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2334779' 00:24:38.961 killing process with pid 2334779 00:24:38.961 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2334779 00:24:38.961 17:47:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2334779 00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
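As a quick consistency check on the bdevperf summary above: the job runs 4096-byte I/Os, so the MiB/s column follows directly from the IOPS column. A one-liner reproducing the reported 25.01 MiB/s from the 6402.76 IOPS figure (purely illustrative):

    awk 'BEGIN { printf "%.2f MiB/s\n", 6402.76 * 4096 / (1024 * 1024) }'
    # prints: 25.01 MiB/s, matching the table

The large Fail/s figure is expected for this test, since it presumably counts I/Os issued during the windows where the target was taken down.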
00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.222 17:47:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.156 17:47:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.156 00:24:41.156 real 0m22.439s 00:24:41.156 user 1m0.705s 00:24:41.156 sys 0m4.077s 00:24:41.156 17:47:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:41.156 17:47:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.156 ************************************ 00:24:41.156 END TEST nvmf_bdevperf 00:24:41.156 ************************************ 00:24:41.156 17:47:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:41.156 17:47:36 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:41.156 17:47:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:41.156 17:47:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.156 17:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:41.156 ************************************ 00:24:41.156 START TEST nvmf_target_disconnect 00:24:41.156 ************************************ 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:41.156 * Looking for test storage... 
00:24:41.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.156 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.415 17:47:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.416 17:47:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:24:43.319 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:43.320 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:43.320 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.320 17:47:38 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:43.320 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:43.320 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:43.320 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.579 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.579 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:43.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:24:43.580 00:24:43.580 --- 10.0.0.2 ping statistics --- 00:24:43.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.580 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:24:43.580 00:24:43.580 --- 10.0.0.1 ping statistics --- 00:24:43.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.580 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:43.580 ************************************ 00:24:43.580 START TEST nvmf_target_disconnect_tc1 00:24:43.580 ************************************ 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:43.580 
17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:43.580 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.580 [2024-07-15 17:47:38.628744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.580 [2024-07-15 17:47:38.628821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x103e1a0 with addr=10.0.0.2, port=4420 00:24:43.580 [2024-07-15 17:47:38.628864] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:43.580 [2024-07-15 17:47:38.628902] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:43.580 [2024-07-15 17:47:38.628919] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:43.580 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:43.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:43.580 Initializing NVMe Controllers 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:43.580 00:24:43.580 real 0m0.101s 00:24:43.580 user 0m0.038s 00:24:43.580 sys 
0m0.062s 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:43.580 ************************************ 00:24:43.580 END TEST nvmf_target_disconnect_tc1 00:24:43.580 ************************************ 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:43.580 ************************************ 00:24:43.580 START TEST nvmf_target_disconnect_tc2 00:24:43.580 ************************************ 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2337926 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2337926 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2337926 ']' 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.580 17:47:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:43.841 [2024-07-15 17:47:38.744947] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:43.841 [2024-07-15 17:47:38.745020] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.841 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.841 [2024-07-15 17:47:38.813870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.841 [2024-07-15 17:47:38.925041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.841 [2024-07-15 17:47:38.925090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.841 [2024-07-15 17:47:38.925118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.841 [2024-07-15 17:47:38.925129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.841 [2024-07-15 17:47:38.925138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.841 [2024-07-15 17:47:38.925408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:43.841 [2024-07-15 17:47:38.925466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:43.841 [2024-07-15 17:47:38.925575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:43.841 [2024-07-15 17:47:38.925582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 Malloc0 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:44.775 17:47:39 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 [2024-07-15 17:47:39.719794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 [2024-07-15 17:47:39.748071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2338081 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.775 17:47:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:44.775 EAL: No free 2048 kB 
hugepages reported on node 1 00:24:46.678 17:47:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2337926 00:24:46.678 17:47:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 [2024-07-15 17:47:41.772217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Write completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 [2024-07-15 17:47:41.772569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.678 Read completed with error (sct=0, sc=8) 00:24:46.678 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O 
failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Read completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 Write completed with error (sct=0, sc=8) 00:24:46.679 starting I/O failed 00:24:46.679 [2024-07-15 17:47:41.772950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:46.679 [2024-07-15 17:47:41.773163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.773203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.773370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.773397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.773596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.773622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.773800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.773845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 
00:24:46.679 [2024-07-15 17:47:41.774032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.774060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.774212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.774239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.774388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.774413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.774637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.774679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.774890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.774917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.775063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.775088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.775297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.775337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.775613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.775644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.775871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.775903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.776055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.776081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 
00:24:46.679 [2024-07-15 17:47:41.776237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.776262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.776442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.776468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.776661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.776703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.776893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.776920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.777066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.777092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.777270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.777295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.777443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.777468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.777634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.777659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.777826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.777851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.778003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.778034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 
00:24:46.679 [2024-07-15 17:47:41.778185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.778210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.778374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.778403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.778619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.778645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.778789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.778814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.778977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.779002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.779192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.779217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.779378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.779403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.779559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.779584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.779770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.779795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.779956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.779982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 
00:24:46.679 [2024-07-15 17:47:41.780148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.780173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.780348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.780373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.780537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.780564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.780776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.780804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.780998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.781024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.781166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.781192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.781394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.781420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.781598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.781626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.781801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.781828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 00:24:46.679 [2024-07-15 17:47:41.782005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.679 [2024-07-15 17:47:41.782031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.679 qpair failed and we were unable to recover it. 
00:24:46.679 [2024-07-15 17:47:41.782173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.782198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.782370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.782411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.782591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.782619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.782842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.782872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.783057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.783083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.783252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.783277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.783447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.783488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.783632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.783660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.783850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.783875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.784016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.784043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 
00:24:46.680 [2024-07-15 17:47:41.784212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.784237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.784423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.784448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.784706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.784735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.784975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.785001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.785168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.785193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.785357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.785385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.785549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.785592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.785760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.785785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.785924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.785950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.786095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.786126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 
00:24:46.680 [2024-07-15 17:47:41.786259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.786284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.786445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.786472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.786634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.786659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.786800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.786826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.786999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.787025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.787169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.787194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.787400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.787428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.787652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.787678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.787814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.787839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.788027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.788053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 
00:24:46.680 [2024-07-15 17:47:41.788294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.788320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.788525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.788553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.788745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.788770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.788940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.788966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.789109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.789135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.789300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.789326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.789520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.789546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.789806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.789831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.789964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.789990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.790192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.790220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 
00:24:46.680 [2024-07-15 17:47:41.790377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.790402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.790597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.790623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.790872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.790906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.680 [2024-07-15 17:47:41.791079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.680 [2024-07-15 17:47:41.791104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.680 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.791290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.791316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.791480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.791506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Write completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 Read completed with error (sct=0, sc=8)
00:24:46.681 starting I/O failed
00:24:46.681 [2024-07-15 17:47:41.791854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:46.681 [2024-07-15 17:47:41.792097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.681 [2024-07-15 17:47:41.792136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420
00:24:46.681 qpair failed and we were unable to recover it.
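The block above is the I/O path draining: each outstanding read or write completes with status type sct=0 (Generic Command Status) and status code sc=8, which the NVMe base specification lists as Command Aborted due to SQ Deletion, consistent with the queue pair being torn down after the CQ transport error -6 (-ENXIO, "No such device or address") reported by spdk_nvme_qpair_process_completions. A small sketch of how those two fields are unpacked from dword 3 of a completion queue entry (bit positions per the NVMe base specification; the struct and helper names here are illustrative, not SPDK's API):

#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t sc;   /* status code       */
    uint8_t sct;  /* status code type  */
    uint8_t dnr;  /* do-not-retry bit  */
};

/* CQE DW3[31:17] is the status field: SC sits in bits 24:17,
 * SCT in bits 27:25, DNR in bit 31 (bit 16 is the phase tag). */
static struct nvme_status decode_cqe_dw3(uint32_t dw3)
{
    struct nvme_status s = {
        .sc  = (uint8_t)((dw3 >> 17) & 0xff),
        .sct = (uint8_t)((dw3 >> 25) & 0x07),
        .dnr = (uint8_t)((dw3 >> 31) & 0x01),
    };
    return s;
}

int main(void)
{
    /* Example value carrying SCT=0, SC=0x08, matching the completions above. */
    uint32_t dw3 = 0x08u << 17;
    struct nvme_status s = decode_cqe_dw3(dw3);
    printf("sct=%u, sc=%u, dnr=%u\n", s.sct, s.sc, s.dnr);
    return 0;
}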
00:24:46.681 [2024-07-15 17:47:41.792320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.792348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.792569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.792613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.792861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.792934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.793103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.793129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.793350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.793392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.793607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.793650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.793825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.793850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.794026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.794052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.794250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.794275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.794446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.794473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 
00:24:46.681 [2024-07-15 17:47:41.794694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.794737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.794884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.794910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.795100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.795125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.795339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.795367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.795623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.795648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.795782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.795807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.795975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.796014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.796209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.796239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.796416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.796445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.796702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.796728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 
00:24:46.681 [2024-07-15 17:47:41.796935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.796962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.797103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.797131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.797311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.797336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.797561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.797621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.797816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.797842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.798033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.798072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.798280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.798319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.798516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.798546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.798854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.798929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.799110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.799135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 
00:24:46.681 [2024-07-15 17:47:41.799319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.799347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.799652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.799703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.799884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.799915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.800082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.800106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.800275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.800303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.800617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.800668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.800847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.800875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.681 qpair failed and we were unable to recover it. 00:24:46.681 [2024-07-15 17:47:41.801070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.681 [2024-07-15 17:47:41.801094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.801262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.801288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.801445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.801475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 
00:24:46.682 [2024-07-15 17:47:41.801721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.801773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.801966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.801991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.802152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.802177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.802332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.802362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.802762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.802816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.802973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.802999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.803189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.803216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.803437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.803486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.803641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.803669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.803871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.803904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 
00:24:46.682 [2024-07-15 17:47:41.804084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.804111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.804299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.804327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.804628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.804684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.804881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.804906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.805074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.805099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.805241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.805283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.805596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.805654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.805836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.805864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.806033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.806058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.806208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.806233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 
00:24:46.682 [2024-07-15 17:47:41.806374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.806400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.806592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.806620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.806826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.806854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.807090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.807115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.807323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.807348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.807542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.807567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.807778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.807805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.807989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.808014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.808162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.808188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.808374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.808402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 
00:24:46.682 [2024-07-15 17:47:41.808584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.808612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.808819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.808847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.809030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.809067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.809263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.809291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.809499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.809524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.809700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.809727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.809912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.809953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.810124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.810149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.810331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.810356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.682 [2024-07-15 17:47:41.810486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.810510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 
00:24:46.682 [2024-07-15 17:47:41.810649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.682 [2024-07-15 17:47:41.810673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.682 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.810824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.810866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.811090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.811118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.811277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.811302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.811447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.811472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.811640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.811665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.811824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.811849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.812018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.812044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.812182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.812208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.812348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.812373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 
00:24:46.968 [2024-07-15 17:47:41.812534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.812559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.812725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.812750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.812919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.812944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.813154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.813181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.813362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.813391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.813569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.813595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.813805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.813833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.814059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.814085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.814249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.814274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 00:24:46.968 [2024-07-15 17:47:41.814461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.968 [2024-07-15 17:47:41.814489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.968 qpair failed and we were unable to recover it. 
00:24:46.968 [2024-07-15 17:47:41.814649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.968 [2024-07-15 17:47:41.814674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:46.968 qpair failed and we were unable to recover it.
[... the same three-entry sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 17:47:41.814 through 17:47:41.856, first for tqpair=0x7effc8000b90 and later also for tqpair=0x7effb8000b90, with every attempt refused ...]
00:24:46.973 [2024-07-15 17:47:41.856622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.973 [2024-07-15 17:47:41.856652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420
00:24:46.973 qpair failed and we were unable to recover it.
00:24:46.973 [2024-07-15 17:47:41.856866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.973 [2024-07-15 17:47:41.856902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.973 qpair failed and we were unable to recover it. 00:24:46.973 [2024-07-15 17:47:41.857061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.973 [2024-07-15 17:47:41.857086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.973 qpair failed and we were unable to recover it. 00:24:46.973 [2024-07-15 17:47:41.857254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.973 [2024-07-15 17:47:41.857280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.973 qpair failed and we were unable to recover it. 00:24:46.973 [2024-07-15 17:47:41.857479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.857509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.857667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.857692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.857855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.857885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.858081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.858106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.858247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.858272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.858431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.858473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.858786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.858839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 
00:24:46.974 [2024-07-15 17:47:41.859029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.859055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.859269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.859297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.859550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.859599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.859782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.859807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.859993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.860021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.860179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.860206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.860396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.860421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.860608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.860637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.860845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.860873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.861047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.861073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 
00:24:46.974 [2024-07-15 17:47:41.861280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.861308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.861560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.861607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.861817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.861842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.862006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.862031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.862219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.862247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.862438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.862464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.862630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.862655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.862815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.862840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.863006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.863031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.863169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.863215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 
00:24:46.974 [2024-07-15 17:47:41.863425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.863450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.863615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.863640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.863825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.863850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.863995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.864020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.864186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.864210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.864399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.864424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.864562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.864605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.864798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.864823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.864996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.865021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.865179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.865203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 
00:24:46.974 [2024-07-15 17:47:41.865335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.865360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.865568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.865595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.865765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.865793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.865987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.974 [2024-07-15 17:47:41.866011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.974 qpair failed and we were unable to recover it. 00:24:46.974 [2024-07-15 17:47:41.866173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.866198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.866366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.866391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.866589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.866613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.866794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.866820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.867033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.867058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.867219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.867245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 
00:24:46.975 [2024-07-15 17:47:41.867442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.867470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.867648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.867675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.867857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.867889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.868049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.868074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.868254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.868281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.868499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.868524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.868714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.868741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.868918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.868945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.869113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.869138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.869273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.869299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 
00:24:46.975 [2024-07-15 17:47:41.869546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.869595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.869803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.869830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.869993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.870019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.870155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.870179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.870409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.870433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.870624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.870651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.870834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.870861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.871046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.871071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.871268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.871296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.871506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.871535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 
00:24:46.975 [2024-07-15 17:47:41.871676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.871702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.871890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.871933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.872098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.872122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.872288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.872312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.872480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.872504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.872698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.872725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.872914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.872939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.873107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.873131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.873339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.873364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.873506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.873530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 
00:24:46.975 [2024-07-15 17:47:41.873682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.873711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.873897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.873926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.874103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.874128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.874339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.874365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.874518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.975 [2024-07-15 17:47:41.874547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.975 qpair failed and we were unable to recover it. 00:24:46.975 [2024-07-15 17:47:41.874733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.874759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.874899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.874925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.875139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.875163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.875298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.875322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.875541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.875568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 
00:24:46.976 [2024-07-15 17:47:41.875727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.875754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.875942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.875967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.876108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.876132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.876262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.876286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.876451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.876476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.876658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.876685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.876870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.876905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.877071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.877095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.877262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.877286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.877482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.877509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 
00:24:46.976 [2024-07-15 17:47:41.877719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.877743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.877931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.877960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.878114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.878141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.878299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.878323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.878481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.878506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.878718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.878745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.878935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.878961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.879100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.879124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.879318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.879342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.879473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.879502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 
00:24:46.976 [2024-07-15 17:47:41.879702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.879726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.879939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.879965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.880127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.880151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.880289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.880333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.880515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.880543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.880709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.880734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.880918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.880946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.881147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.881174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.881358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.881383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.976 qpair failed and we were unable to recover it. 00:24:46.976 [2024-07-15 17:47:41.881524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.976 [2024-07-15 17:47:41.881549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 
00:24:46.977 [2024-07-15 17:47:41.881684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.881708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.881846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.881869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.882044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.882069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.882280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.882307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.882494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.882519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.882654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.882677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.882822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.882847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.882997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.883022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.883186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.883210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.883375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.883400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 
00:24:46.977 [2024-07-15 17:47:41.883563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.883588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.883766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.883794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.884014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.884039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.884177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.884201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.884363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.884405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.884580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.884608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.884804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.884828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.885011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.885036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.885187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.885215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 00:24:46.977 [2024-07-15 17:47:41.885403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.977 [2024-07-15 17:47:41.885428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.977 qpair failed and we were unable to recover it. 
00:24:46.982 [2024-07-15 17:47:41.926735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.982 [2024-07-15 17:47:41.926761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:46.982 qpair failed and we were unable to recover it.
00:24:46.982 [2024-07-15 17:47:41.926948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.926973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.927160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.927187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.927388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.927416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.927604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.927630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.927815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.927843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.928010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.928038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.928231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.928255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.928417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.928441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.928626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.928653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.928820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.928849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 
00:24:46.982 [2024-07-15 17:47:41.929047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.929072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.929264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.929292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.929470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.929494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.929711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.929737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.929921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.929948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.930167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.930192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.982 [2024-07-15 17:47:41.930406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.982 [2024-07-15 17:47:41.930434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.982 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.930620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.930647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.930839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.930864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.931059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.931087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 
00:24:46.983 [2024-07-15 17:47:41.931263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.931290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.931450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.931474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.931685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.931712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.931870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.931905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.932119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.932144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.932307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.932336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.932540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.932567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.932726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.932749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.932928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.932957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.933141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.933168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 
00:24:46.983 [2024-07-15 17:47:41.933345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.933369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.933552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.933579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.933791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.933819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.934018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.934043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.934224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.934251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.934406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.934433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.934617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.934642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.934827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.934855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.935046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.935072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.935239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.935263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 
00:24:46.983 [2024-07-15 17:47:41.935450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.935476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.935654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.935681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.935864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.935894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.936046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.936072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.936243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.936267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.936433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.936457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.936639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.936667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.936852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.936886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.937041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.937065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.937270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.937304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 
00:24:46.983 [2024-07-15 17:47:41.937489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.937516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.937723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.937748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.937907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.937935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.938143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.938170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.938383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.938408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.938544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.938584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.938770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.938796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.983 [2024-07-15 17:47:41.938962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.983 [2024-07-15 17:47:41.938989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.983 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.939199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.939227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.939391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.939416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 
00:24:46.984 [2024-07-15 17:47:41.939606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.939630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.939814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.939840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.940025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.940053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.940223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.940248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.940436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.940461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.940643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.940671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.940836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.940861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.941068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.941096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.941304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.941331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.941485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.941511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 
00:24:46.984 [2024-07-15 17:47:41.941695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.941723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.941900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.941928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.942112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.942137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.942316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.942343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.942527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.942556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.942737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.942765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.942982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.943008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.943151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.943175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.943324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.943366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.943522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.943550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 
00:24:46.984 [2024-07-15 17:47:41.943700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.943724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.943905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.943932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.944109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.944138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.944319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.944346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.944519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.944544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.944753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.944780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.944962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.944990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.945195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.945223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.945399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.945424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.945559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.945588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 
00:24:46.984 [2024-07-15 17:47:41.945774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.945798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.945984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.946012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.946229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.946254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.946469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.946497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.946677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.946704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.946888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.946916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.947124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.947149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.947324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.984 [2024-07-15 17:47:41.947348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.984 qpair failed and we were unable to recover it. 00:24:46.984 [2024-07-15 17:47:41.947482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.947506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.947646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.947686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 
00:24:46.985 [2024-07-15 17:47:41.947897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.947923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.948075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.948101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.948282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.948308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.948454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.948481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.948662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.948687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.948902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.948930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.949109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.949136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.949341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.949368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.949553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.949578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.949768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.949795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 
00:24:46.985 [2024-07-15 17:47:41.949950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.949979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.950120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.950148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.950338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.950363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.950516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.950543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.950761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.950786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.950919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.950944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.951108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.951133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.951315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.951342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.951526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.951553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.951725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.951751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 
00:24:46.985 [2024-07-15 17:47:41.951939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.951965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.952107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.952131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.952302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.952344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.952495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.952523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.952706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.952731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.952912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.952940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.953094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.953121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.953276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.953302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.953489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.953514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 00:24:46.985 [2024-07-15 17:47:41.953673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.985 [2024-07-15 17:47:41.953707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.985 qpair failed and we were unable to recover it. 
00:24:46.985 [2024-07-15 17:47:41.953892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.985 [2024-07-15 17:47:41.953921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:46.985 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with successive timestamps through 2024-07-15 17:47:41.997336 ...]
00:24:46.991 [2024-07-15 17:47:41.997513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.997538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.997690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.997718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.997863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.997899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.998062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.998090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.998284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.998309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.998495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.998524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.998733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.998761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.998947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.998973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.999115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.999140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.999325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.999354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 
00:24:46.991 [2024-07-15 17:47:41.999528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.999555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.999764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.999790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:41.999962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:41.999987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.000172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.000201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.000394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.000420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.000611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.000635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.000817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.000845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.001021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.001045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.001189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.001213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.001376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.001401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 
00:24:46.991 [2024-07-15 17:47:42.001534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.001558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.001741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.001768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.001923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.001952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.002134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.002162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.002371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.002395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.002571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.002597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.002779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.002806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.003007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.003035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.003198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.003223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.003355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.003397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 
00:24:46.991 [2024-07-15 17:47:42.003606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.003633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.003851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.003885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.004084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.004109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.004262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.991 [2024-07-15 17:47:42.004290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.991 qpair failed and we were unable to recover it. 00:24:46.991 [2024-07-15 17:47:42.004471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.004503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.004663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.004692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.004856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.004894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.005086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.005109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.005296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.005323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.005505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.005533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 
00:24:46.992 [2024-07-15 17:47:42.005742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.005767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.005920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.005948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.006128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.006154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.006372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.006397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.006528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.006553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.006758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.006785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.006967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.006994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.007167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.007193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.007357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.007380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.007536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.007563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 
00:24:46.992 [2024-07-15 17:47:42.007717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.007744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.007915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.007942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.008114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.008139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.008303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.008331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.008505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.008532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.008715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.008741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.008953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.008982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.009189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.009216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.009388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.009414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.009595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.009623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 
00:24:46.992 [2024-07-15 17:47:42.009809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.009835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.010013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.010038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.010192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.010221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.010449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.010474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.010637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.010662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.010834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.010862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.011026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.011053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.011197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.011225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.011384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.011409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.011613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.011640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 
00:24:46.992 [2024-07-15 17:47:42.011828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.011855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.012036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.012063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.012251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.012278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.012455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.012483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.012701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.992 [2024-07-15 17:47:42.012743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.992 qpair failed and we were unable to recover it. 00:24:46.992 [2024-07-15 17:47:42.012941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.012967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.013155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.013179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.013365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.013392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.013540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.013569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.013776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.013804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 
00:24:46.993 [2024-07-15 17:47:42.013991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.014016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.014197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.014225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.014384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.014412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.014567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.014595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.014805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.014830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.014972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.014997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.015162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.015203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.015350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.015378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.015592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.015616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.015802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.015829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 
00:24:46.993 [2024-07-15 17:47:42.016040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.016069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.016248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.016275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.016439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.016464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.016600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.016642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.016826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.016854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.017095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.017135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.017307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.017339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.017534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.017559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.017759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.017788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.017994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.018024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 
00:24:46.993 [2024-07-15 17:47:42.018236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.018261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.018464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.018492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.018677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.018707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.018890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.018920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.019074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.019099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.019279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.019306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.019493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.019521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.019771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.019802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.019994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.020021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.020205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.020233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 
00:24:46.993 [2024-07-15 17:47:42.020431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.993 [2024-07-15 17:47:42.020459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.993 qpair failed and we were unable to recover it. 00:24:46.993 [2024-07-15 17:47:42.020676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.020702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.020897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.020923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.021143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.021171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.021326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.021356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.021646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.021695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.021852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.021884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.022074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.022103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.022292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.022318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.022499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.022527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 
00:24:46.994 [2024-07-15 17:47:42.022714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.022740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.022906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.022933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.023125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.023155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.023371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.023401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.023615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.023640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.023834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.023862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.024073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.024116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.024332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.024362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.024538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.024563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.024805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.024857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 
00:24:46.994 [2024-07-15 17:47:42.025068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.025093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.025251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.025279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.025438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.025462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.025604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.025646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.025821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.025849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.026038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.026063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.026201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.026225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.026406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.026462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.026620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.026647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.026796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.026825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 
00:24:46.994 [2024-07-15 17:47:42.027050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.027074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.027232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.027259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.027469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.027523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.027775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.027826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.028000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.028026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.028213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.028240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.028490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.028539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.028696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.028725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.028989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.029015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.029200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.029228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 
00:24:46.994 [2024-07-15 17:47:42.029536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.029596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.029811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.994 [2024-07-15 17:47:42.029840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.994 qpair failed and we were unable to recover it. 00:24:46.994 [2024-07-15 17:47:42.030052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.030077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.030389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.030449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.030720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.030749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.030945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.030970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.031162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.031187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.031470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.031519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.031720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.031747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.031942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.031967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 
00:24:46.995 [2024-07-15 17:47:42.032136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.032161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.032419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.032460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.032760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.032810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.032981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.033006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.033188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.033213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.033416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.033461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.033710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.033762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.033960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.033986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.034154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.034178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.034397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.034446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 
00:24:46.995 [2024-07-15 17:47:42.034691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.034734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.034967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.034996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.035164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.035200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.035399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.035429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.035690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.035740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.035966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.035998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.036164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.036190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.036435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.036483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.036648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.036679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.036892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.036948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 
00:24:46.995 [2024-07-15 17:47:42.037100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.037127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.037340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.037369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.037558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.037584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.037771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.037800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.037977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.038004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.038215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.038244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.038496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.038543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.038720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.038749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.038946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.038974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.039186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.039215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 
00:24:46.995 [2024-07-15 17:47:42.039505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.039561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.039779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.039819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.040026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.995 [2024-07-15 17:47:42.040054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.995 qpair failed and we were unable to recover it. 00:24:46.995 [2024-07-15 17:47:42.040217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.040245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.040545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.040584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.040752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.040779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.040991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.041017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.041150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.041193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.041422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.041450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.041609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.041636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 
00:24:46.996 [2024-07-15 17:47:42.041837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.041862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.042021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.042047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.042264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.042292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.042448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.042476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.042657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.042681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.042864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.042897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.043076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.043101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.043292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.043318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.043500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.043525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.043673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.043697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 
00:24:46.996 [2024-07-15 17:47:42.043866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.043896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.044052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.044077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.044267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.044292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.044423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.044447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.044632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.044657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.044849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.044884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.045054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.045079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.045217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.045262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.045507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.045558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.045757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.045783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 
00:24:46.996 [2024-07-15 17:47:42.045948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.045974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.046192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.046220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.046443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.046509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.046735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.046765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.046954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.046981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.047147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.047173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.047427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.047477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.047701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.047750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.047941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.047968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.048148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.048191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 
00:24:46.996 [2024-07-15 17:47:42.048426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.048475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.048676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.048726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.048907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.048938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.049086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.996 [2024-07-15 17:47:42.049110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.996 qpair failed and we were unable to recover it. 00:24:46.996 [2024-07-15 17:47:42.049271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.049295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.049456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.049480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.049611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.049636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.049838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.049869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.050076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.050102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.050299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.050327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 
00:24:46.997 [2024-07-15 17:47:42.050488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.050512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.050695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.050722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.050907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.050949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.051091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.051116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.051313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.051337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.051495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.051519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.051693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.051720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.051901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.051943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.052134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.052159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.052409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.052455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 
00:24:46.997 [2024-07-15 17:47:42.052700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.052760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.053003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.053031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.053196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.053222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.053438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.053463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.053675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.053705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.053892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.053937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.054091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.054117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.054287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.054313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.054559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.054602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.054764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.054794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 
00:24:46.997 [2024-07-15 17:47:42.055019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.055045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.055203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.055231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.055472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.055520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.055704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.055732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.055891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.055916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.056106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.056131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.056352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.056405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.056589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.056617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.056800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.056825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.056957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.056982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 
00:24:46.997 [2024-07-15 17:47:42.057144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.057184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.057378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.057403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.057554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.997 [2024-07-15 17:47:42.057579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.997 qpair failed and we were unable to recover it. 00:24:46.997 [2024-07-15 17:47:42.057752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.057779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.057972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.057997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.058196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.058237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.058404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.058429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.058596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.058621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.058836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.058863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.059037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.059062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 
00:24:46.998 [2024-07-15 17:47:42.059224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.059248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.059410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.059435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.059669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.059717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.059858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.059893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.060072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.060096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.060344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.060390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.060633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.060696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.060908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.060954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.061135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.061162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.061395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.061445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 
00:24:46.998 [2024-07-15 17:47:42.061708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.061736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.061965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.061997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.062170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.062196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.062420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.062474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.062684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.062745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.062947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.062974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.063152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.063177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.063401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.063429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.063624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.063654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.063841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.063868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 
00:24:46.998 [2024-07-15 17:47:42.064046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.064073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.064250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.064276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.064543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.064593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.064744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.064771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.064958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.064982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.065181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.065208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.065430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.065478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.065693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.065720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.065905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.065940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.066080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.066105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 
00:24:46.998 [2024-07-15 17:47:42.066371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.998 [2024-07-15 17:47:42.066418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.998 qpair failed and we were unable to recover it. 00:24:46.998 [2024-07-15 17:47:42.066599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.066626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.066815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.066840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.066999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.067025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.067215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.067242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.067406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.067434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.067651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.067676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.067869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.067905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.068088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.068112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.068263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.068291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 
00:24:46.999 [2024-07-15 17:47:42.068472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.068496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.068658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.068685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.068891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.068919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.069087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.069110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.069277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.069302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.069562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.069609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.069763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.069791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.069965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.069990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.070179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.070203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.070359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.070385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 
00:24:46.999 [2024-07-15 17:47:42.070527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.070554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.070711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.070740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.070929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.070953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.071161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.071189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.071357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.071384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.071588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.071615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.071827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.071852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.072023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.072049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.072202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.072230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.072417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.072449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 
00:24:46.999 [2024-07-15 17:47:42.072621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.072646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.072845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.072871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.073062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.073089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.073269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.073296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.073482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.073507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.073656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.073684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.073892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.073920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.074076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.074103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.074276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.074301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.074510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.074538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 
00:24:46.999 [2024-07-15 17:47:42.074692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.074719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:46.999 [2024-07-15 17:47:42.074897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.999 [2024-07-15 17:47:42.074924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:46.999 qpair failed and we were unable to recover it. 00:24:47.000 [2024-07-15 17:47:42.075091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.000 [2024-07-15 17:47:42.075116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.000 qpair failed and we were unable to recover it. 00:24:47.000 [2024-07-15 17:47:42.075301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.000 [2024-07-15 17:47:42.075328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.000 qpair failed and we were unable to recover it. 00:24:47.000 [2024-07-15 17:47:42.075486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.000 [2024-07-15 17:47:42.075514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.000 qpair failed and we were unable to recover it. 00:24:47.000 [2024-07-15 17:47:42.075695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.000 [2024-07-15 17:47:42.075722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.000 qpair failed and we were unable to recover it. 00:24:47.000 [2024-07-15 17:47:42.075892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.000 [2024-07-15 17:47:42.075917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.000 qpair failed and we were unable to recover it. 00:24:47.000 [2024-07-15 17:47:42.076097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.000 [2024-07-15 17:47:42.076124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.000 qpair failed and we were unable to recover it. 00:24:47.000 [2024-07-15 17:47:42.076292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.076317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.076482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.076523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 
00:24:47.282 [2024-07-15 17:47:42.076688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.076713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.076901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.076927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.077127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.077152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.077292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.077316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.077518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.077542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.077729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.077755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.077918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.077952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.078140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.078168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.078323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.078347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.078484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.078509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 
00:24:47.282 [2024-07-15 17:47:42.078669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.078697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.078883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.078911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.079105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.079129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.079292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.079317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.079475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.079502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.079685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.079713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.079863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.079893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.080037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.080077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.080224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.080251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.080406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.080434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 
00:24:47.282 [2024-07-15 17:47:42.080603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.080627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.080837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.080865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.081054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.081082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.081254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.282 [2024-07-15 17:47:42.081282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.282 qpair failed and we were unable to recover it. 00:24:47.282 [2024-07-15 17:47:42.081468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.081494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.081651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.081679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.081837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.081866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.082066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.082094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.082298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.082322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.082540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.082564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 
00:24:47.283 [2024-07-15 17:47:42.082726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.082750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.082937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.082965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.083140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.083164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.083325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.083374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.083532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.083559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.083716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.083743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.083931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.083956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.084115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.084160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.084364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.084390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.084568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.084594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 
00:24:47.283 [2024-07-15 17:47:42.084758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.084783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.084919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.084945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.085082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.085106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.085262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.085304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.085462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.085486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.085669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.085695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.085888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.085916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.086100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.086128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.086354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.086379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.086567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.086595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 
00:24:47.283 [2024-07-15 17:47:42.086778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.086806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.087023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.087049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.087212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.087236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.087439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.087466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.087621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.087648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.283 qpair failed and we were unable to recover it. 00:24:47.283 [2024-07-15 17:47:42.087801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.283 [2024-07-15 17:47:42.087828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.088018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.088044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.088177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.088200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.088394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.088419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.088630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.088656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 
00:24:47.284 [2024-07-15 17:47:42.088840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.088865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.089094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.089122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.089301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.089328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.089503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.089529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.089726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.089750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.089980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.090008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.090185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.090209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.090360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.090385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.090641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.090665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.090844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.090872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 
00:24:47.284 [2024-07-15 17:47:42.091035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.091061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.091238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.091264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.091449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.091475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.091662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.091690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.091909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.091954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.092146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.092184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.092353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.092380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.092578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.092627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.092815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.092844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.093011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.093039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 
00:24:47.284 [2024-07-15 17:47:42.093221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.093246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.093444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.093478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.093710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.093759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.093911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.093939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.094100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.094124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.094306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.094330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.094577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.094624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.094789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.094816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.095004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.095030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.095239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.095266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 
00:24:47.284 [2024-07-15 17:47:42.095440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.095487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.095667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.095694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.095874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.284 [2024-07-15 17:47:42.095906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.284 qpair failed and we were unable to recover it. 00:24:47.284 [2024-07-15 17:47:42.096049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.096074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.096269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.096312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.096520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.096550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.096724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.096749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.096979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.097016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.097209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.097236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.097404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.097429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 
00:24:47.285 [2024-07-15 17:47:42.097569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.097594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.097727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c600e0 is same with the state(5) to be set 00:24:47.285 [2024-07-15 17:47:42.097974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.098005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.098174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.098199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.098366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.098390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.098557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.098583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.098792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.098822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.099037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.099064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.099284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.099315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 00:24:47.285 [2024-07-15 17:47:42.099520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.285 [2024-07-15 17:47:42.099557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.285 qpair failed and we were unable to recover it. 
00:24:47.285 [2024-07-15 17:47:42.099781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.285 [2024-07-15 17:47:42.099806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:47.285 qpair failed and we were unable to recover it.
[... the same three-record failure repeats for tqpair=0x7effc8000b90 through 2024-07-15 17:47:42.103863 ...]
00:24:47.286 [2024-07-15 17:47:42.104034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.286 [2024-07-15 17:47:42.104072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:47.286 qpair failed and we were unable to recover it.
[... the same three-record failure repeats for tqpair=0x1c52200 through 2024-07-15 17:47:42.143137 ...]
00:24:47.292 [2024-07-15 17:47:42.143365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.143392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.143554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.143579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.143760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.143787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.143948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.143975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.144155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.144180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.144361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.144388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.144560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.144587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.144746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.144770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.144901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.144944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.145120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.145147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 
00:24:47.292 [2024-07-15 17:47:42.145328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.145352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.145496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.145520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.145689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.145715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.145904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.145930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.146081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.146107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.146310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.146336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.146496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.146521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.146697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.146722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.146897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.146924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.147108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.147133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 
00:24:47.292 [2024-07-15 17:47:42.147262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.147287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.147496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.147522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.147667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.147691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.147868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.147909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.148056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.148083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.148232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.148263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.148402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.148443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.292 qpair failed and we were unable to recover it. 00:24:47.292 [2024-07-15 17:47:42.148638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.292 [2024-07-15 17:47:42.148664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.148845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.148870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.149021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.149046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 
00:24:47.293 [2024-07-15 17:47:42.149220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.149246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.149424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.149449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.149597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.149623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.149786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.149812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.149987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.150012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.150190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.150215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.150356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.150382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.150537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.150561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.150715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.150740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.150906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.150933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 
00:24:47.293 [2024-07-15 17:47:42.151107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.151132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.151314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.151339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.151530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.151555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.151716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.151740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.151903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.151945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.152133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.152159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.152323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.152347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.152502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.152529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.152670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.152698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.152871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.152905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 
00:24:47.293 [2024-07-15 17:47:42.153036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.153077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.153248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.153274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.153479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.153507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.153693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.153719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.153873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.153907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.154086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.154111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.154295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.154322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.154482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.154511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.154696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.154720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.154852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.154901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 
00:24:47.293 [2024-07-15 17:47:42.155104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.155131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.155324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.155349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.293 [2024-07-15 17:47:42.155538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.293 [2024-07-15 17:47:42.155563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.293 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.155723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.155749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.155901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.155927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.156052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.156092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.156240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.156266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.156448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.156472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.156638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.156662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.156787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.156812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 
00:24:47.294 [2024-07-15 17:47:42.157003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.157029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.157189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.157215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.157424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.157451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.157613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.157639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.157824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.157851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.158034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.158062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.158248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.158273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.158431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.158456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.158616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.158643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.158803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.158828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 
00:24:47.294 [2024-07-15 17:47:42.158970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.158996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.159211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.159238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.159425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.159451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.159611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.159637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.159841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.159868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.160063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.160088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.160262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.160288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.160436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.160463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.160644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.160669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.160851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.160884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 
00:24:47.294 [2024-07-15 17:47:42.161059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.161086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.161238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.161263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.161426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.161466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.161612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.161639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.294 [2024-07-15 17:47:42.161820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.294 [2024-07-15 17:47:42.161845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.294 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.162060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.162086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.162226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.162252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.162403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.162427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.162617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.162642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.162807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.162833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 
00:24:47.295 [2024-07-15 17:47:42.163022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.163047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.163231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.163258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.163434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.163462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.163636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.163661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.163837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.163862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.164052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.164078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.164243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.164268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.164459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.164484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.164697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.164724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.164885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.164911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 
00:24:47.295 [2024-07-15 17:47:42.165099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.165125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.165255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.165282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.165439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.165464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.165639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.165666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.165844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.165871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.166050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.166075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.166201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.166226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.166438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.166464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.166628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.166653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.166792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.166816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 
00:24:47.295 [2024-07-15 17:47:42.167025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.167056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.167212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.167236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.167445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.167472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.167636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.167662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.167819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.167843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.168020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.168047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.168197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.168224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.168430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.295 [2024-07-15 17:47:42.168455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.295 qpair failed and we were unable to recover it. 00:24:47.295 [2024-07-15 17:47:42.168627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.168653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.168830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.168855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 
00:24:47.296 [2024-07-15 17:47:42.169027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.169053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.169234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.169262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.169432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.169458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.169614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.169638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.169850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.169884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.170039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.170065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.170251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.170276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.170443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.170468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.170623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.170647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.170815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.170840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 
00:24:47.296 [2024-07-15 17:47:42.170988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.171013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.171171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.171213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.171371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.171395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.171582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.171606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.171802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.171827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.172018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.172043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.172222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.172247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.172387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.172417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.172595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.172619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.172760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.172786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 
00:24:47.296 [2024-07-15 17:47:42.172968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.172994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.173159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.173184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.173347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.173372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.173508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.173534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.173664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.173689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.173852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.173898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.174067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.174092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.174251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.174277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.174451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.174477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.174631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.174656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 
00:24:47.296 [2024-07-15 17:47:42.174838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.174863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.296 qpair failed and we were unable to recover it. 00:24:47.296 [2024-07-15 17:47:42.175025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.296 [2024-07-15 17:47:42.175050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.175210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.175235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.175429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.175453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.175607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.175633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.175805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.175830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.176018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.176044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.176228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.176255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.176437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.176462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.176648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.176673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 
00:24:47.297 [2024-07-15 17:47:42.176832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.176857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.177021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.177046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.177233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.177258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.177411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.177436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.177597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.177622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.177793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.177818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.177963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.177989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.178126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.178151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.178291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.178315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.178483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.178508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 
00:24:47.297 [2024-07-15 17:47:42.178692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.178717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.178857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.178979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.179126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.179151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.179292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.179316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.179470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.179495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.179625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.179649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.179816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.179841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.179997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.180022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.180199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.180238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.180401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.180434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 
00:24:47.297 [2024-07-15 17:47:42.180588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.180613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.297 qpair failed and we were unable to recover it. 00:24:47.297 [2024-07-15 17:47:42.180776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.297 [2024-07-15 17:47:42.180801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.181001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.181039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.181198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.181224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.181390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.181417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.181558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.181584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.181721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.181747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.181885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.181911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.182048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.182074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.182266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.182291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 
00:24:47.298 [2024-07-15 17:47:42.182469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.182493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.182636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.182662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.182837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.182862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.183037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.183062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.183227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.183253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.183430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.183454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.183613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.183638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.183829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.183853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.184031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.184057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.184191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.184216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 
00:24:47.298 [2024-07-15 17:47:42.184404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.184429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.184570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.184595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.184754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.184779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.184943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.184968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.185125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.185150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.185330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.185369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.185515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.185545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.185705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.185730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.185900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.185935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.186104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.186128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 
00:24:47.298 [2024-07-15 17:47:42.186308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.186334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.186492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.186528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.186703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.186735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.298 [2024-07-15 17:47:42.186907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.298 [2024-07-15 17:47:42.186932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.298 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.187116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.187142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.187349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.187377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.187555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.187590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.187758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.187784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.187950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.187987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.188144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.188169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 
00:24:47.299 [2024-07-15 17:47:42.188374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.188399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.188558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.188585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.188749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.188775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.188937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.188974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.189147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.189171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.189342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.189368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.189505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.189531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.189673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.189698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.189888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.189914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.190095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.190134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 
00:24:47.299 [2024-07-15 17:47:42.190311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.190338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.190499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.190525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.190692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.190717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.190848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.190873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.191078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.191103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.191266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.191291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.191420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.191444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.191583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.191608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.191751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.191776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.191937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.191963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 
00:24:47.299 [2024-07-15 17:47:42.192101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.192126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.192254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.192278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.192476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.192501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.192634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.192658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.192799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.192824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.193008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.193037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.193202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.299 [2024-07-15 17:47:42.193227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.299 qpair failed and we were unable to recover it. 00:24:47.299 [2024-07-15 17:47:42.193360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.193385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.193572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.193597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.193757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.193782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 
00:24:47.300 [2024-07-15 17:47:42.193934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.193960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.194120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.194144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.194312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.194337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.194468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.194493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.194678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.194702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.194836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.194860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.195032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.195057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.195219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.195244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.195406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.195431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.195599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.195624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 
00:24:47.300 [2024-07-15 17:47:42.195786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.195811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.195974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.195999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.196135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.196160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.196313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.196337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.196505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.196529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.196694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.196718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.196886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.196911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.197055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.197080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.197235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.197260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.197431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.197456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 
00:24:47.300 [2024-07-15 17:47:42.197590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.197616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.197763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.197788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.197923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.197952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.198130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.198155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.198291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.198316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.198479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.198504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.198668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.198693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.198916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.198965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.199182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.199212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.199401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.199428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 
00:24:47.300 [2024-07-15 17:47:42.199651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.199700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.199891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.300 [2024-07-15 17:47:42.199933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.300 qpair failed and we were unable to recover it. 00:24:47.300 [2024-07-15 17:47:42.200076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.200102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.200270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.200296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.200510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.200538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.200723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.200749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.200957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.200983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.201169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.201195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.201407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.201431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.201658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.201710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 
00:24:47.301 [2024-07-15 17:47:42.201862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.201898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.202049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.202075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.202254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.202282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.202443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.202470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.202681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.202706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.202870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.202900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.203072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.203097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.203227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.203252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.203439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.203467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.203679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.203710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 
00:24:47.301 [2024-07-15 17:47:42.203920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.203945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.204120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.204145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.204304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.204332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.204508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.204534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.204695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.204719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.204898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.204923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.205083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.205108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.205251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.205276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.205438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.205479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.205690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.205714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 
00:24:47.301 [2024-07-15 17:47:42.205870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.205901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.206077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.206102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.206267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.206292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.206479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.301 [2024-07-15 17:47:42.206506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.301 qpair failed and we were unable to recover it. 00:24:47.301 [2024-07-15 17:47:42.206687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.206714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.206871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.206901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.207065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.207090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.207248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.207275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.207455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.207481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.207680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.207728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 
00:24:47.302 [2024-07-15 17:47:42.207967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.207995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.208150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.208176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.208355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.208381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.208589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.208617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.208799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.208825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.208993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.209020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.209184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.209213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.209403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.209428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.209568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.209612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.209790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.209818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 
00:24:47.302 [2024-07-15 17:47:42.210029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.210055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.210237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.210263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.210404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.210430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.210610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.210635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.210828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.210853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.211054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.211079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.211248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.211273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.211465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.211490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.211696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.211721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.211902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.211932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 
00:24:47.302 [2024-07-15 17:47:42.212100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.212125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.212290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.212316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.212479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.212503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.212710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.212755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.212974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.213000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.213188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.213212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.213415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.213441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.213619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.213646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.302 qpair failed and we were unable to recover it. 00:24:47.302 [2024-07-15 17:47:42.213852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.302 [2024-07-15 17:47:42.213882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.214076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.214101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 
00:24:47.303 [2024-07-15 17:47:42.214286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.214315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.214493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.214518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.214708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.214735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.214894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.214922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.215097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.215122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.215267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.215295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.215472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.215501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.215689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.215715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.215895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.215938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.216100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.216125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 
00:24:47.303 [2024-07-15 17:47:42.216281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.216306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.216496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.216521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.216749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.216775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.216950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.216976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.217200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.217229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.217380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.217408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.217600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.217625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.217781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.217811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.217989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.218019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.218205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.218231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 
00:24:47.303 [2024-07-15 17:47:42.218401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.218428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.218635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.218660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.218801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.218825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.219016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.219042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.219213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.219241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.219426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.219451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.219636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.303 [2024-07-15 17:47:42.219663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.303 qpair failed and we were unable to recover it. 00:24:47.303 [2024-07-15 17:47:42.219810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.219837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.220063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.220090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.220235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.220265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 
00:24:47.304 [2024-07-15 17:47:42.220463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.220492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.220676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.220701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.220893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.220923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.221102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.221130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.221328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.221353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.221537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.221564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.221715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.221743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.221932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.221957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.222107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.222137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.222350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.222375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 
00:24:47.304 [2024-07-15 17:47:42.222562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.222587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.222774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.222804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.222955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.222984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.223173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.223198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.223380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.223408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.223596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.223621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.223808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.223833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.223977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.224002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.224133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.224175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.224349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.224374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 
00:24:47.304 [2024-07-15 17:47:42.224506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.224550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.224731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.224759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.224940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.224976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.225137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.225178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.225396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.225421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.225626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.225651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.225805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.225833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.226021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.226046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.226187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.226212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 00:24:47.304 [2024-07-15 17:47:42.226366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.304 [2024-07-15 17:47:42.226393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.304 qpair failed and we were unable to recover it. 
00:24:47.304 [2024-07-15 17:47:42.226552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.226580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.226728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.226753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.226932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.226961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.227138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.227166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.227327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.227353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.227525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.227549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.227716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.227746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.227928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.227954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.228137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.228166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.228359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.228389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 
00:24:47.305 [2024-07-15 17:47:42.228571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.228598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.228769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.228794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.228978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.229007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.229196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.229221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.229433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.229461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.229644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.229669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.229853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.229884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.230040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.230066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.230201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.230244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.230428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.230453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 
00:24:47.305 [2024-07-15 17:47:42.230638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.230668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.230853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.230883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.231074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.231099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.231293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.231321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.231475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.231499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.231657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.231682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.231842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.231869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.232059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.232087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.232270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.232295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.232483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.232511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 
00:24:47.305 [2024-07-15 17:47:42.232698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.232747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.232927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.232954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.233145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.233173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.305 [2024-07-15 17:47:42.233387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.305 [2024-07-15 17:47:42.233412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.305 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.233603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.233628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.233781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.233806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.233984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.234013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.234190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.234215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.234397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.234439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.234590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.234617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 
00:24:47.306 [2024-07-15 17:47:42.234778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.234805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.235027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.235055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.235236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.235264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.235452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.235477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.235650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.235677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.235862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.235892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.236055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.236080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.236239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.236266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.236445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.236472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 00:24:47.306 [2024-07-15 17:47:42.236679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.306 [2024-07-15 17:47:42.236708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.306 qpair failed and we were unable to recover it. 
00:24:47.306 [2024-07-15 17:47:42.236891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.306 [2024-07-15 17:47:42.236918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:47.306 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt logged between 17:47:42.236891 and 17:47:42.279008 ...]
00:24:47.314 [2024-07-15 17:47:42.278982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.314 [2024-07-15 17:47:42.279008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:47.314 qpair failed and we were unable to recover it.
00:24:47.314 [2024-07-15 17:47:42.279203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.279232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.279441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.279470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.279633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.279659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.279852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.279932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.280107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.280133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.280327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.280353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.280559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.280599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.280811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.280839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.281016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.281042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.281208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.281237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 
00:24:47.314 [2024-07-15 17:47:42.281387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.281416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.281640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.281666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.281853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.281898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.282091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.282117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.282282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.282307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.282475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.282505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.282692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.282721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.282884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.282911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.314 qpair failed and we were unable to recover it. 00:24:47.314 [2024-07-15 17:47:42.283082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.314 [2024-07-15 17:47:42.283108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.283318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.283349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 
00:24:47.315 [2024-07-15 17:47:42.283508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.283539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.283720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.283748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.283943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.283969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.284105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.284130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.284293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.284334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.284508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.284536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.284699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.284728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.284897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.284934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.285071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.285097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.285256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.285281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 
00:24:47.315 [2024-07-15 17:47:42.285451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.285477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.285613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.285638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.285767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.285792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.285929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.285956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.286125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.286168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.286348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.286374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.286528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.286556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.286757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.286786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.286942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.286968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.287105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.287131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 
00:24:47.315 [2024-07-15 17:47:42.287305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.287331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.287499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.287525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.287686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.287726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.287933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.287960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.288103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.288128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.288292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.288317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.288486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.288518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.288726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.288753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.288956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.315 [2024-07-15 17:47:42.288981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.315 qpair failed and we were unable to recover it. 00:24:47.315 [2024-07-15 17:47:42.289124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.289151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 
00:24:47.316 [2024-07-15 17:47:42.289318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.289343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.289536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.289565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.289745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.289774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.289938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.289964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.290111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.290138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.290324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.290352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.290506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.290537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.290723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.290752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.290907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.290949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.291092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.291125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 
00:24:47.316 [2024-07-15 17:47:42.291296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.291322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.291491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.291518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.291674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.291709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.291857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.291909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.292048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.292083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.292235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.292262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.292422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.292447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.292605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.292630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.292763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.292790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.292984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.293011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 
00:24:47.316 [2024-07-15 17:47:42.293171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.293200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.293378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.293403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.293600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.293630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.293806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.293834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.294033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.294063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.294255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.294285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.294488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.294517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.294678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.294704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.294924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.294953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.295129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.295158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 
00:24:47.316 [2024-07-15 17:47:42.295357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.295392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.295616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.295646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.295835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.316 [2024-07-15 17:47:42.295861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.316 qpair failed and we were unable to recover it. 00:24:47.316 [2024-07-15 17:47:42.296015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.296044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.296226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.296251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.296470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.296500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.296686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.296721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.296928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.296957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.297161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.297188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.297375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.297400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 
00:24:47.317 [2024-07-15 17:47:42.297593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.297635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.297814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.297841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.298040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.298066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.298232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.298257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.298420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.298445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.298585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.298612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.298755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.298801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.298998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.299024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.299182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.299207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.299393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.299419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 
00:24:47.317 [2024-07-15 17:47:42.299547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.299572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.299709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.299736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.299945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.299973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.300131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.300161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.300340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.300367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.300554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.300583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.300733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.300762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.300958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.300990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.301141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.301169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.301373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.301405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 
00:24:47.317 [2024-07-15 17:47:42.301609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.301635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.301847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.301874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.302053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.302082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.302273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.302308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.302499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.302527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.302703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.317 [2024-07-15 17:47:42.302730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.317 qpair failed and we were unable to recover it. 00:24:47.317 [2024-07-15 17:47:42.302950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.302979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.303206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.303234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.303452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.303487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.303628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.303662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 
00:24:47.318 [2024-07-15 17:47:42.303884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.303913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.304084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.304112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.304305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.304332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.304521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.304550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.304731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.304760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.304941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.304976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.305152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.305178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.305363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.305391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.305585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.305611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.305769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.305794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 
00:24:47.318 [2024-07-15 17:47:42.305936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.305967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.306123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.306149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.306363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.306392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.306571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.306601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.306774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.306805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.306969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.306998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.307205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.307238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.307389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.307416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.307585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.307613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.307768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.307800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 
00:24:47.318 [2024-07-15 17:47:42.308029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.308055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.308228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.308258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.308419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.308447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.308641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.308668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.308847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.308894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.309092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.309126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.309303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.309329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.309543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.309573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.309755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.309783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 00:24:47.318 [2024-07-15 17:47:42.309944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.318 [2024-07-15 17:47:42.309970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.318 qpair failed and we were unable to recover it. 
00:24:47.319 [2024-07-15 17:47:42.310165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.310189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.310354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.310382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.310590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.310615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.310804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.310831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.311067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.311092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.311234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.311261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.311416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.311443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.311619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.311647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.311828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.311854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.312022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.312048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 
00:24:47.319 [2024-07-15 17:47:42.312238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.312266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.312449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.312473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.312654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.312681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.312891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.312920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.313139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.313164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.313299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.313323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.313513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.313538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.313700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.313725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.313939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.313967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.314145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.314172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 
00:24:47.319 [2024-07-15 17:47:42.314357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.314381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.314569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.314596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.314772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.314799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.314963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.314989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.315152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.315195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.315395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.315422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.315577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.315607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.315784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.315811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.316018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.316046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.316209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.316234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 
00:24:47.319 [2024-07-15 17:47:42.316413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.316440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.316593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.319 [2024-07-15 17:47:42.316620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.319 qpair failed and we were unable to recover it. 00:24:47.319 [2024-07-15 17:47:42.316824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.316849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.316993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.317020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.317204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.317233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.317389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.317415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.317598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.317625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.317805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.317833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.318023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.318049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.318192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.318220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 
00:24:47.320 [2024-07-15 17:47:42.318435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.318460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.318622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.318647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.318795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.318823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.319004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.319032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.319217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.319242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.319379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.319404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.319561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.319586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.319750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.319775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.319956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.319984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.320162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.320189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 
00:24:47.320 [2024-07-15 17:47:42.320363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.320388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.320568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.320595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.320769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.320796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.321007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.321033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.321188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.321215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.321367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.321395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.321601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.321625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.321808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.321837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.320 [2024-07-15 17:47:42.322021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.320 [2024-07-15 17:47:42.322050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.320 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.322255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.322280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 
00:24:47.321 [2024-07-15 17:47:42.322469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.322498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.322643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.322671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.322837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.322862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.323018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.323045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.323252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.323280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.323476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.323501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.323683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.323715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.323869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.323905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.324095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.324120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.324317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.324344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 
00:24:47.321 [2024-07-15 17:47:42.324562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.324587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.324741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.324766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.324937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.324963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.325087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.325112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.325298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.325322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.325460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.325485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.325685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.325712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.325872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.325904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.326070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.326096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.326253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.326282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 
00:24:47.321 [2024-07-15 17:47:42.326474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.326499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.326681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.326708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.326885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.326913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.327134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.327158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.327344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.327373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.327586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.327614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.327800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.327825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.327997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.328022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.328183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.328212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.328429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.328453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 
00:24:47.321 [2024-07-15 17:47:42.328652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.328679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.328889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.328928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.329153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.329178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.329369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.329397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.329574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.321 [2024-07-15 17:47:42.329603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.321 qpair failed and we were unable to recover it. 00:24:47.321 [2024-07-15 17:47:42.329784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.329808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.330017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.330046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.330231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.330258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.330468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.330493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.330696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.330723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 
00:24:47.322 [2024-07-15 17:47:42.330928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.330956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.331114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.331139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.331322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.331350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.331501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.331528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.331677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.331701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.331829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.331855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.332062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.332094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.332317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.332341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.332493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.332520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.332699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.332726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 
00:24:47.322 [2024-07-15 17:47:42.332923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.332950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.333139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.333167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.333318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.333346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.333564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.333589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.333732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.333757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.333925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.333950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.334087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.334113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.334325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.334353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.334497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.334526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.334742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.334767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 
00:24:47.322 [2024-07-15 17:47:42.334923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.334951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.335133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.335162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.335358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.335384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.335573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.335600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.335776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.335804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.335958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.335983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.336164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.336192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.336384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.336409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.336565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.336590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.322 [2024-07-15 17:47:42.336746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.336773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 
00:24:47.322 [2024-07-15 17:47:42.336979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.322 [2024-07-15 17:47:42.337008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.322 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.337178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.337203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.337389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.337414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.337595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.337623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.337787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.337811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.337997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.338022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.338212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.338239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.338395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.338421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.338626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.338654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.338832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.338859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 
00:24:47.323 [2024-07-15 17:47:42.339053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.339078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.339258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.339287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.339465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.339493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.339698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.339723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.339901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.339932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.340103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.340130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.340282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.340311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.340491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.340519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.340698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.340727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.340907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.340933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 
00:24:47.323 [2024-07-15 17:47:42.341122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.341150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.341325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.341352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.341536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.341560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.341736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.341763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.341958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.341983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.342151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.342176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.342359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.342387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.342567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.342595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.342781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.342805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 00:24:47.323 [2024-07-15 17:47:42.343012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.323 [2024-07-15 17:47:42.343040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.323 qpair failed and we were unable to recover it. 
00:24:47.330 [2024-07-15 17:47:42.384138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.384166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.384338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.384366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.384573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.384598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.384781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.384810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.384994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.385023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.385238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.385264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.385445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.385473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.385652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.385679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.385846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.385870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.386016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.386057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 
00:24:47.330 [2024-07-15 17:47:42.386205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.386233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.386415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.386440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.386627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.386656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.386849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.386883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.387059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.387084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.387255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.387282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.387469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.387493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.387625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.387650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.387864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.387898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 00:24:47.330 [2024-07-15 17:47:42.388110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.330 [2024-07-15 17:47:42.388137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.330 qpair failed and we were unable to recover it. 
00:24:47.330 [2024-07-15 17:47:42.388294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.388319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.388494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.388519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.388706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.388731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.388893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.388919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.389134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.389162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.389306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.389333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.389518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.389543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.389726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.389753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.389905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.389933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.390081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.390107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 
00:24:47.331 [2024-07-15 17:47:42.390287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.390315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.390501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.390528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.390739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.390764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.390951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.390979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.391156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.391184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.391367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.391392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.391598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.391626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.391776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.391804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.391999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.392025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.392192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.392216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 
00:24:47.331 [2024-07-15 17:47:42.392401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.392429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.392637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.392662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.392801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.392825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.392985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.393011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.393174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.393199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.393416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.393443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.393593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.393621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.393770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.393794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.393985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.394014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.394194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.394222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 
00:24:47.331 [2024-07-15 17:47:42.394426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.394451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.394605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.394633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.394808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.394835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.331 [2024-07-15 17:47:42.395021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.331 [2024-07-15 17:47:42.395050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.331 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.395201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.395231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.395411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.395440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.395619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.395644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.395827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.395855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.396015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.396043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.396224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.396250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 
00:24:47.616 [2024-07-15 17:47:42.396434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.396463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.396639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.396667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.396843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.396871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.397043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.397069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.397258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.397285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.397443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.397468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.397648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.397675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.397862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.397898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.398072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.398097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.398246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.398273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 
00:24:47.616 [2024-07-15 17:47:42.398429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.398456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.398641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.398666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.398847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.398875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.399058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.399086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.399250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.399275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.399413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.399437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.399594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.399619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.399752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.399777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.399954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.399982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.400160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.400187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 
00:24:47.616 [2024-07-15 17:47:42.400380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.400404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.616 [2024-07-15 17:47:42.400579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.616 [2024-07-15 17:47:42.400607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.616 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.400781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.400808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.400996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.401022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.401213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.401238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.401398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.401425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.401606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.401631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.401799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.401824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.402012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.402040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.402189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.402214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 
00:24:47.617 [2024-07-15 17:47:42.402381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.402405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.402534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.402559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.402695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.402719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.402902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.402934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.403112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.403139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.403298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.403323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.403509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.403536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.403708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.403736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.403924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.403950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.404137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.404162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 
00:24:47.617 [2024-07-15 17:47:42.404321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.404348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.404525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.404551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.404731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.404760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.404974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.405002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.405181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.405205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.405412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.405439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.405585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.405613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.405808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.405833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.405982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.406007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.406171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.406196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 
00:24:47.617 [2024-07-15 17:47:42.406336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.406361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.406534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.406561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.406705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.406732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.406921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.406947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.407134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.407162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.407367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.407394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.407547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.407572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.407756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.407785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.407976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.408004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.408183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.408208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 
00:24:47.617 [2024-07-15 17:47:42.408373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.408402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.617 [2024-07-15 17:47:42.408606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.617 [2024-07-15 17:47:42.408633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.617 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.408808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.408833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.408976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.409002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.409191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.409219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.409396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.409420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.409629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.409656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.409807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.409835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.410022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.410047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.410200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.410228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 
00:24:47.618 [2024-07-15 17:47:42.410413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.410441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.410624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.410650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.410860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.410894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.411054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.411086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.411273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.411298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.411458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.411485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.411664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.411692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.411873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.411903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.412087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.412115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 00:24:47.618 [2024-07-15 17:47:42.412264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.618 [2024-07-15 17:47:42.412292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.618 qpair failed and we were unable to recover it. 
00:24:47.623 [2024-07-15 17:47:42.452990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.453015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.453204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.453229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.453450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.453477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.453697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.453725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.453943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.453969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.454118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.454147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.454351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.454379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.454545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.454571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.454752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.454780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.454956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.454985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 
00:24:47.623 [2024-07-15 17:47:42.455166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.455191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.455371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.455399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.455537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.455564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.455779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.455804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.455971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.455997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.456171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.456199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.456377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.456403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.623 [2024-07-15 17:47:42.456581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.623 [2024-07-15 17:47:42.456607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.623 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.456805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.456832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.457032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.457058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 
00:24:47.624 [2024-07-15 17:47:42.457257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.457282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.457476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.457504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.457691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.457716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.457905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.457933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.458110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.458139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.458338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.458363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.458520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.458547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.458721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.458750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.458929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.458955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.459110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.459138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 
00:24:47.624 [2024-07-15 17:47:42.459314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.459342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.459512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.459536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.459666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.459708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.459961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.459994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.460155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.460180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.460317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.460359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.460539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.460569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.460727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.460752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.460915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.460944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.461124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.461152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 
00:24:47.624 [2024-07-15 17:47:42.461313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.461340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.461497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.461522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.461712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.461740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.461918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.461944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.462075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.462099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.462274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.462301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.462507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.462532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.462720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.462747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.462929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.462971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.463132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.463157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 
00:24:47.624 [2024-07-15 17:47:42.463373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.463401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.463546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.463573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.463728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.624 [2024-07-15 17:47:42.463753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.624 qpair failed and we were unable to recover it. 00:24:47.624 [2024-07-15 17:47:42.463924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.463950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.464158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.464185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.464361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.464386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.464595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.464623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.464801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.464829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.465022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.465048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.465220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.465247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 
00:24:47.625 [2024-07-15 17:47:42.465425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.465452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.465615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.465641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.465812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.465837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.466006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.466031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.466189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.466214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.466395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.466424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.466579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.466617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.466809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.466833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.467013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.467039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.467235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.467262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 
00:24:47.625 [2024-07-15 17:47:42.467476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.467501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.467694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.467722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.467890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.467915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.468080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.468110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.468291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.468319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.468470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.468497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.468687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.468711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.468903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.468931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.469078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.469107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.469320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.469345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 
00:24:47.625 [2024-07-15 17:47:42.469544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.469572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.469766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.469790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.469949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.469975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.470159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.470187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.470345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.470374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.470555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.470580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.470749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.470774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.470944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.470970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.471124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.471149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.471309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.471333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 
00:24:47.625 [2024-07-15 17:47:42.471473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.471498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.471663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.471689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.471872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.471908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.472091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.625 [2024-07-15 17:47:42.472116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.625 qpair failed and we were unable to recover it. 00:24:47.625 [2024-07-15 17:47:42.472299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.472324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.472501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.472529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.472703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.472732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.472895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.472920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.473127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.473155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.473328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.473357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 
00:24:47.626 [2024-07-15 17:47:42.473543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.473569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.473780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.473808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.473971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.473997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.474166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.474191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.474329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.474355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.474543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.474568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.474756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.474782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.474958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.474986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.475164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.475193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.475401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.475425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 
00:24:47.626 [2024-07-15 17:47:42.475587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.475614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.475794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.475821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.476015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.476040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.476229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.476262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.476439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.476466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.476628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.476653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.476810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.476852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.477044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.477074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.477236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.477262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.477410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.477435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 
00:24:47.626 [2024-07-15 17:47:42.477611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.477639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.477822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.477847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.477984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.478010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.478192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.478221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.478443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.478468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.478624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.478651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.478805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.478834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.479005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.479031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.479195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.479220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.479361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.479386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 
00:24:47.626 [2024-07-15 17:47:42.479554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.479578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.479730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.479758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.479939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.479967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.480182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.480206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.626 [2024-07-15 17:47:42.480368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.626 [2024-07-15 17:47:42.480395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.626 qpair failed and we were unable to recover it. 00:24:47.627 [2024-07-15 17:47:42.480583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.627 [2024-07-15 17:47:42.480608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.627 qpair failed and we were unable to recover it. 00:24:47.627 [2024-07-15 17:47:42.480795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.627 [2024-07-15 17:47:42.480821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.627 qpair failed and we were unable to recover it. 00:24:47.627 [2024-07-15 17:47:42.481044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.627 [2024-07-15 17:47:42.481072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.627 qpair failed and we were unable to recover it. 00:24:47.627 [2024-07-15 17:47:42.481248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.627 [2024-07-15 17:47:42.481277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.627 qpair failed and we were unable to recover it. 00:24:47.627 [2024-07-15 17:47:42.481464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.627 [2024-07-15 17:47:42.481489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.627 qpair failed and we were unable to recover it. 
00:24:47.632 [2024-07-15 17:47:42.522901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.522930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.523132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.523167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.523344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.523368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.523566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.523593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.523812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.523842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.524013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.524041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.524248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.524280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.524449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.524478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.524661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.524690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.524882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.524910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 
00:24:47.632 [2024-07-15 17:47:42.525124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.525152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.525354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.525384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.525604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.525630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.525790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.525817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.525982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.526012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.526211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.526238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.526380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.526405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.526614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.526642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.526804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.526840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.527021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.527047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 
00:24:47.632 [2024-07-15 17:47:42.527258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.527286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.527476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.527501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.527688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.527716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.527937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.527967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.528159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.528186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.528399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.528428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.528615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.528644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.528845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.528872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.529046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.529078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.529270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.529301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 
00:24:47.632 [2024-07-15 17:47:42.529457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.529489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.529672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.529701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.529890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.529920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.632 [2024-07-15 17:47:42.530094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.632 [2024-07-15 17:47:42.530120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.632 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.530334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.530362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.530542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.530571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.530773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.530799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.530983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.531013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.531167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.531196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.531375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.531402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 
00:24:47.633 [2024-07-15 17:47:42.531560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.531593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.531742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.531770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.531955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.531983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.532130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.532179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.532361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.532389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.532571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.532600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.532737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.532764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.532920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.532946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.533137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.533162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.533349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.533379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 
00:24:47.633 [2024-07-15 17:47:42.533586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.533615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.533769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.533794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.534010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.534041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.534240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.534266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.534433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.534458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.534621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.534650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.534803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.534831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.535001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.535028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.535175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.535200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.535392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.535418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 
00:24:47.633 [2024-07-15 17:47:42.535621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.535647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.535830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.535859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.536039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.536066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.536270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.536296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.536435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.536470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.536619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.536644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.536785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.536813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.537010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.537049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.633 qpair failed and we were unable to recover it. 00:24:47.633 [2024-07-15 17:47:42.537238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.633 [2024-07-15 17:47:42.537267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.537454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.537479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 
00:24:47.634 [2024-07-15 17:47:42.537711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.537746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.537950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.537979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.538159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.538184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.538363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.538391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.538541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.538569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.538770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.538797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.538971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.539000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.539184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.539213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.539402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.539428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.539586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.539613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 
00:24:47.634 [2024-07-15 17:47:42.539817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.539851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.540017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.540044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.540185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.540210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.540367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.540399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.540578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.540603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.540768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.540794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.540973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.541004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.541207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.541234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.541438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.541466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.541626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.541655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 
00:24:47.634 [2024-07-15 17:47:42.541842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.541869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.542076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.542104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.542274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.542302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.542473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.542499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.542710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.542739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.542902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.542943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.543132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.543158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.543347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.543377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.543537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.543564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.543788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.543814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 
00:24:47.634 [2024-07-15 17:47:42.544004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.544045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.544247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.544282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.544451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.544476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.544672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.544721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.544930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.544961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.545126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.545152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.545332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.545360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.545578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.545607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.545790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.634 [2024-07-15 17:47:42.545816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.634 qpair failed and we were unable to recover it. 00:24:47.634 [2024-07-15 17:47:42.546000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.546029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 
00:24:47.635 [2024-07-15 17:47:42.546205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.546233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.546425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.546451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.546602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.546629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.546802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.546831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.547072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.547099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.547294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.547324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.547507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.547537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.547719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.547744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.547912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.547947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.548173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.548203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 
00:24:47.635 [2024-07-15 17:47:42.548378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.548407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.548562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.548589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.548799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.548828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.549008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.549035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.549248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.549277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.549456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.549489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.549680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.549715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.549883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.549928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.550103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.550131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.550329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.550355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 
00:24:47.635 [2024-07-15 17:47:42.550540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.550567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.550751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.550780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.550945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.550972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.551158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.551189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.551403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.551432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.551619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.551646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.551826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.551856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.552029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.552057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.552227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.552253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.552398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.552423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 
00:24:47.635 [2024-07-15 17:47:42.552633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.552663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.552848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.552887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.553039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.553064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.553229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.553255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.553393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.553417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.553590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.553615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.553811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.553850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.554082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.554107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.554328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.554357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 00:24:47.635 [2024-07-15 17:47:42.554547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.635 [2024-07-15 17:47:42.554576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.635 qpair failed and we were unable to recover it. 
00:24:47.635 [2024-07-15 17:47:42.554734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.554761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.554959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.554988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.555168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.555198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.555361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.555397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.555626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.555654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.555865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.555903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.556097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.556123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.556283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.556308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.556503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.556532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.556700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.556725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 
00:24:47.636 [2024-07-15 17:47:42.556922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.556956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.557107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.557136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.557339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.557365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.557534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.557563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.557718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.557758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.557948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.557974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.558183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.558213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.558372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.558401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.558583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.558608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.558766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.558807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 
00:24:47.636 [2024-07-15 17:47:42.559005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.559035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.559246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.559271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.559433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.559467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.559662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.559690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.559901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.559928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.560110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.560140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.560301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.560330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.560526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.560561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.560702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.560727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.560939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.560969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 
00:24:47.636 [2024-07-15 17:47:42.561131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.561156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.561329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.561357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.561562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.561598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.561769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.561794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.561934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.561962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.562119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.562144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.562330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.562354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.562543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.562573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.562730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.562758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.562921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.562947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 
00:24:47.636 [2024-07-15 17:47:42.563130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.636 [2024-07-15 17:47:42.563159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.636 qpair failed and we were unable to recover it. 00:24:47.636 [2024-07-15 17:47:42.563326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.563352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.563520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.563545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.563729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.563758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.563926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.563955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.564130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.564156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.564341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.564369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.564550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.564577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.564724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.564758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.564975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.565016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 
00:24:47.637 [2024-07-15 17:47:42.565202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.565234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.565417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.565443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.565635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.565663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.565857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.565899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.566070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.566095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.566278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.566307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.566494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.566522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.566735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.566760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.566904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.566930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.567096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.567141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 
00:24:47.637 [2024-07-15 17:47:42.567324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.567350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.567510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.567539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.567692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.567721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.567899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.567930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.568089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.568118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.568325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.568351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.568516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.568542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.568682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.568711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.568872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.568906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.569077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.569103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 
00:24:47.637 [2024-07-15 17:47:42.569240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.569266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.569419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.569444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.569604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.569628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.569812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.569841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.570043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.570073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.637 [2024-07-15 17:47:42.570236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.637 [2024-07-15 17:47:42.570262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.637 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.570400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.570425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.570603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.570630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.570838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.570864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.571042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.571072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 
00:24:47.638 [2024-07-15 17:47:42.571236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.571265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.571452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.571478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.571610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.571637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.571871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.571907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.572077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.572103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.572272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.572308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.572525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.572553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.572713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.572739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.572900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.572930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.573136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.573165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 
00:24:47.638 [2024-07-15 17:47:42.573317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.573347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.573558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.573586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.573739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.573767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.573964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.573991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.574153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.574182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.574339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.574369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.574581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.574608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.574793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.574822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.574969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.574999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.575188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.575213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 
00:24:47.638 [2024-07-15 17:47:42.575418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.575447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.575594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.575623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.575804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.575829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.575999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.576025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.576169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.576196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.576337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.576363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.576585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.576614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.576790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.576819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.577006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.577032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.577206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.577235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 
00:24:47.638 [2024-07-15 17:47:42.577386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.577414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.577612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.577639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.577836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.577865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.578085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.578110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.578275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.578300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.578467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.578504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.638 qpair failed and we were unable to recover it. 00:24:47.638 [2024-07-15 17:47:42.578688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.638 [2024-07-15 17:47:42.578716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.578882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.578908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.579092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.579122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.579313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.579348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 
00:24:47.639 [2024-07-15 17:47:42.579529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.579554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.579722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.579754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.579943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.579979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.580137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.580162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.580346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.580386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.580543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.580575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.580772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.580798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.580980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.581009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.581185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.581214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.581387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.581428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 
00:24:47.639 [2024-07-15 17:47:42.581617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.581662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.581846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.581874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.582091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.582117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.582275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.582305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.582486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.582515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.582703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.582729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.582872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.582928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.583096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.583124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.583340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.583366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.583528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.583555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 
00:24:47.639 [2024-07-15 17:47:42.583749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.583782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.584001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.584038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.584204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.584233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.584382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.584410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.584584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.584611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.584789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.584818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.584986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.585015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.585192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.585217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.585409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.585438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.585613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.585641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 
00:24:47.639 [2024-07-15 17:47:42.585811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.585837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.586012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.586039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.586252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.586280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.586441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.586466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.586597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.586622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.586792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.586821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.587066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.639 [2024-07-15 17:47:42.587092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.639 qpair failed and we were unable to recover it. 00:24:47.639 [2024-07-15 17:47:42.587279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.640 [2024-07-15 17:47:42.587307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.640 qpair failed and we were unable to recover it. 00:24:47.640 [2024-07-15 17:47:42.587496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.640 [2024-07-15 17:47:42.587525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.640 qpair failed and we were unable to recover it. 00:24:47.640 [2024-07-15 17:47:42.587707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.640 [2024-07-15 17:47:42.587742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.640 qpair failed and we were unable to recover it. 
00:24:47.645 [2024-07-15 17:47:42.629012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.629037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.629216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.629243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.629427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.629455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.629671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.629697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.629898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.629928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.630115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.630143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.630325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.630359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.630551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.630580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.630760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.630789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.631017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.631042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 
00:24:47.645 [2024-07-15 17:47:42.631210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.631238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.631419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.631447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.631641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.631670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.631855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.631901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.632060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.632088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.632309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.632336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.632504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.632533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.632711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.632738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.632928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.632953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.633092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.633117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 
00:24:47.645 [2024-07-15 17:47:42.633352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.633377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.633510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.633535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.633728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.633754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.633932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.633972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.634171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.634197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.634358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.634388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.634562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.634592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.634780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.634805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.634963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.634991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.635175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.635203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 
00:24:47.645 [2024-07-15 17:47:42.635394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.635419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.635588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.635614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.635762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.635787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.635955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.635982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.636140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.636168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.645 [2024-07-15 17:47:42.636393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.645 [2024-07-15 17:47:42.636419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.645 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.636583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.636608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.636770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.636806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.636993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.637022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.637194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.637220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 
00:24:47.646 [2024-07-15 17:47:42.637355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.637380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.637537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.637564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.637696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.637724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.637924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.637954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.638134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.638162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.638342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.638367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.638546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.638573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.638758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.638785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.638975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.639001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.639181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.639209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 
00:24:47.646 [2024-07-15 17:47:42.639377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.639405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.639588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.639613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.639822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.639849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.640066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.640094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.640279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.640304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.640492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.640521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.640700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.640727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.640894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.640943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.641129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.641172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.641385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.641414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 
00:24:47.646 [2024-07-15 17:47:42.641570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.641595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.641751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.641776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.641944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.641974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.642141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.642167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.642336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.642360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.642560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.642584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.642746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.642771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.642959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.642988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.643167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.643196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 00:24:47.646 [2024-07-15 17:47:42.643388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.643413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.646 qpair failed and we were unable to recover it. 
00:24:47.646 [2024-07-15 17:47:42.643546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.646 [2024-07-15 17:47:42.643571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.643750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.643777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.643973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.643999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.644186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.644216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.644401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.644429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.644637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.644661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.644842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.644869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.645069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.645104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.645297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.645321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.645478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.645508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 
00:24:47.647 [2024-07-15 17:47:42.645685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.645712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.645915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.645941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.646125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.646154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.646360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.646388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.646596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.646621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.646829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.646856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.647060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.647088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.647276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.647301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.647481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.647509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.647686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.647714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 
00:24:47.647 [2024-07-15 17:47:42.647971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.647997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.648174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.648199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.648417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.648445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.648618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.648644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.648781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.648824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.648978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.649007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.649216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.649241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.649396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.649425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.649602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.649631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.649796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.649822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 
00:24:47.647 [2024-07-15 17:47:42.649954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.649997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.650161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.650190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.650345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.650371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.650537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.650579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.650752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.650780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.650968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.650993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.651132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.651172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.651361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.651386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.651550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.651575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.651782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.651810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 
00:24:47.647 [2024-07-15 17:47:42.651988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.647 [2024-07-15 17:47:42.652016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.647 qpair failed and we were unable to recover it. 00:24:47.647 [2024-07-15 17:47:42.652204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.652229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.652410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.652438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.652582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.652610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.652789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.652814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.653007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.653036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.653186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.653213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.653399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.653428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.653615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.653642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.653812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.653840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 
00:24:47.648 [2024-07-15 17:47:42.654032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.654058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.654240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.654268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.654451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.654478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.654661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.654686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.654843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.654870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.655053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.655081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.655246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.655271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.655475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.655502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.655680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.655708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 00:24:47.648 [2024-07-15 17:47:42.655860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.648 [2024-07-15 17:47:42.655893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.648 qpair failed and we were unable to recover it. 
00:24:47.648 [2024-07-15 17:47:42.656064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.648 [2024-07-15 17:47:42.656089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:47.648 qpair failed and we were unable to recover it.
00:24:47.648 [... the same three-line failure repeats for every reconnect attempt between 17:47:42.656 and 17:47:42.699: connect() to 10.0.0.2 port 4420 returns errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7effc8000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:47.653 [2024-07-15 17:47:42.699282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.653 [2024-07-15 17:47:42.699306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:47.653 qpair failed and we were unable to recover it.
00:24:47.653 [2024-07-15 17:47:42.699468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.699509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.699690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.699717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.699899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.699925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.700104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.700132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.700321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.700349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.700529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.700554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.700739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.700768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.700919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.700947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.701120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.701145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.701310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.701335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 
00:24:47.653 [2024-07-15 17:47:42.701493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.701518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.701672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.701697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.701888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.701916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.702082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.702110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.653 qpair failed and we were unable to recover it. 00:24:47.653 [2024-07-15 17:47:42.702270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.653 [2024-07-15 17:47:42.702297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.702505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.702532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.702703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.702730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.702894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.702923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.703105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.703132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.703276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.703304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 
00:24:47.654 [2024-07-15 17:47:42.703490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.703516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.703702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.703730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.703956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.703984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.704180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.704205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.704408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.704436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.704639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.704666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.704825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.704849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.705017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.705042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.705224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.705252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.705433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.705457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 
00:24:47.654 [2024-07-15 17:47:42.705639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.705666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.705827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.705855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.706075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.706100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.706263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.706290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.706493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.706520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.706737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.706762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.706945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.706974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.707188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.707215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.707393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.707418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.707585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.707610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 
00:24:47.654 [2024-07-15 17:47:42.707748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.707773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.707934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.707959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.708154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.708181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.708355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.708382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.708593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.708622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.708809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.708837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.709022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.709047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.709193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.709218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.709426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.709453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.709596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.709625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 
00:24:47.654 [2024-07-15 17:47:42.709803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.709828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.709994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.710019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.654 qpair failed and we were unable to recover it. 00:24:47.654 [2024-07-15 17:47:42.710174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.654 [2024-07-15 17:47:42.710202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.710413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.710438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.710590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.710619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.710774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.710802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.710986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.711011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.711194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.711221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.711440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.711467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.711649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.711674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 
00:24:47.655 [2024-07-15 17:47:42.711855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.711888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.712071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.712098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.712255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.712280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.712488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.712515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.712689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.712717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.712898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.712923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.713106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.713134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.713336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.713363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.713539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.713564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.713715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.713743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 
00:24:47.655 [2024-07-15 17:47:42.713896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.713926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.714114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.714140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.714326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.714354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.714539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.714567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.714721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.714746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.714958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.714987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.715186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.715210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.715371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.715397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.715558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.715584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.715801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.715826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 
00:24:47.655 [2024-07-15 17:47:42.716013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.716038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.716251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.716279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.716470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.716495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.716686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.716711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.716900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.716932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.717137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.717165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.717351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.717376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.717595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.717623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.717800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.717829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.718023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.718049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 
00:24:47.655 [2024-07-15 17:47:42.718223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.718251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.718457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.718485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.718646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.718670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.718826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.718851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.719053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.719081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.719265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.719289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.719430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.719455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.719642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.719666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.655 qpair failed and we were unable to recover it. 00:24:47.655 [2024-07-15 17:47:42.719869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.655 [2024-07-15 17:47:42.719901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.720090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.720118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 
00:24:47.656 [2024-07-15 17:47:42.720322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.720350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.720539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.720566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.720754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.720782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.720966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.720991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.721161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.721186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.721368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.721395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.721575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.721603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.721785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.721810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.722022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.722051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.722256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.722284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 
00:24:47.656 [2024-07-15 17:47:42.722444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.722469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.722652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.722680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.722854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.722900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.723093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.723118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.723299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.723327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.723502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.723529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.723715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.723740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.723948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.723976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.724130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.724158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.724344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.724369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 
00:24:47.656 [2024-07-15 17:47:42.724523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.724550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.724700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.724728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.724934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.724960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.725141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.725168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.725354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.725384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.725523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.725549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.725688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.725714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.725887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.725913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.726078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.726103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.726279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.726306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 
00:24:47.656 [2024-07-15 17:47:42.726472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.726500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.726713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.726738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.726871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.726910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.727053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.727078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.656 [2024-07-15 17:47:42.727242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.656 [2024-07-15 17:47:42.727267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.656 qpair failed and we were unable to recover it. 00:24:47.941 [2024-07-15 17:47:42.727456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.941 [2024-07-15 17:47:42.727485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.941 qpair failed and we were unable to recover it. 00:24:47.941 [2024-07-15 17:47:42.727665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.941 [2024-07-15 17:47:42.727695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.941 qpair failed and we were unable to recover it. 00:24:47.941 [2024-07-15 17:47:42.727889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.941 [2024-07-15 17:47:42.727916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.941 qpair failed and we were unable to recover it. 00:24:47.941 [2024-07-15 17:47:42.728106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.941 [2024-07-15 17:47:42.728134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.941 qpair failed and we were unable to recover it. 00:24:47.941 [2024-07-15 17:47:42.728327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.941 [2024-07-15 17:47:42.728352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.941 qpair failed and we were unable to recover it. 
00:24:47.942 [2024-07-15 17:47:42.728515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.728540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.728723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.728752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.728937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.728966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.729118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.729142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.729324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.729352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.729496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.729525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.729680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.729705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.729915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.729944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.730122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.730149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.730332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.730358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 
00:24:47.942 [2024-07-15 17:47:42.730569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.730596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.730746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.730774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.730957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.730983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.731191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.731219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.731405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.731433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.731624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.731649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.731801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.731829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.732042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.732070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.732232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.732257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.732389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.732431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 
00:24:47.942 [2024-07-15 17:47:42.732620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.732648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.732808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.732833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.733004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.733029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.733214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.733242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.733404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.733432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.733560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.733602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.733809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.733837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.734024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.734050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.734198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.734226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.734411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.734438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 
00:24:47.942 [2024-07-15 17:47:42.734597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.734621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.734754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.734797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.734985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.735011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.735177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.735202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.735379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.735406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.735549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.735578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.735787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.735812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.735994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.736022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.736213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.736241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 00:24:47.942 [2024-07-15 17:47:42.736398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.942 [2024-07-15 17:47:42.736423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.942 qpair failed and we were unable to recover it. 
00:24:47.942 [2024-07-15 17:47:42.736605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.736632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.736836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.736864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.737033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.737058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.737226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.737268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.737452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.737479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.737664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.737689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.737869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.737905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.738089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.738117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.738303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.738328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.738510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.738538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 
00:24:47.943 [2024-07-15 17:47:42.738687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.738714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.738933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.738958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.739141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.739168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.739343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.739370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.739559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.739583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.739768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.739795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.739978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.740006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.740194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.740219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.740356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.740380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.740541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.740568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 
00:24:47.943 [2024-07-15 17:47:42.740754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.740779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.740964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.740992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.741204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.741229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.741395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.741419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.741603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.741635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.741809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.741837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.742055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.742079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.742269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.742298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.742469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.742497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.742678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.742703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 
00:24:47.943 [2024-07-15 17:47:42.742892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.742920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.743094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.743121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.743282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.743307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.743447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.743488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.743667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.743691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.943 [2024-07-15 17:47:42.743885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.943 [2024-07-15 17:47:42.743910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.943 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.744093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.744121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.744326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.744353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.744518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.744543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.744723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.744750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 
00:24:47.944 [2024-07-15 17:47:42.744937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.744976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.745138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.745163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.745298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.745339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.745526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.745550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.745741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.745766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.745934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.745962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.746138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.746167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.746382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.746406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.746594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.746622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.746764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.746791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 
00:24:47.944 [2024-07-15 17:47:42.746979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.747005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.747189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.747217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.747370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.747399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.747612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.747637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.747847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.747874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.748033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.748063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.748272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.748297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.748449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.748477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.748630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.748657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.748845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.748869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 
00:24:47.944 [2024-07-15 17:47:42.749083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.749111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.749319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.749346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.749501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.749525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.749701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.749728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.749911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.749945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.750096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.750121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.750292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.750317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.750482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.750506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.750670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.750694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.750918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.750944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 
00:24:47.944 [2024-07-15 17:47:42.751086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.751111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.751280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.751305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.751522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.751550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.751702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.751729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.751887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.751913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.752097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.944 [2024-07-15 17:47:42.752125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.944 qpair failed and we were unable to recover it. 00:24:47.944 [2024-07-15 17:47:42.752332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.752359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.752546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.752571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.752721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.752748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.752968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.752993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 
00:24:47.945 [2024-07-15 17:47:42.753153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.753178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.753355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.753382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.753570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.753595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.753759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.753785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.753967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.753995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.754182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.754207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.754339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.754364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.754503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.754545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.754750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.754777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.754969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.754994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 
00:24:47.945 [2024-07-15 17:47:42.755155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.755180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.755352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.755380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.755589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.755613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.755795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.755823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.755980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.756008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.756170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.756195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.756375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.756403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.756544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.756572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.756785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.756809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.757006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.757034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 
00:24:47.945 [2024-07-15 17:47:42.757189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.757217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.757397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.757422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.757561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.757586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.757773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.757798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.757960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.757989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.758141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.758169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.758355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.758382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.758538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.758563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.758743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.758771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.758952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.758980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 
00:24:47.945 [2024-07-15 17:47:42.759142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.759168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.759307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.759350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.759556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.759580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.759750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.759775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.759928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.759956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.760106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.760135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.945 qpair failed and we were unable to recover it. 00:24:47.945 [2024-07-15 17:47:42.760321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.945 [2024-07-15 17:47:42.760346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.760553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.760580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.760734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.760762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.760919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.760945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 
00:24:47.946 [2024-07-15 17:47:42.761135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.761160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.761349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.761377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.761533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.761558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.761714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.761738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.761917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.761946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.762110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.762135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.762344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.762372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.762525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.762554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.762741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.762767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.762932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.762962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 
00:24:47.946 [2024-07-15 17:47:42.763147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.763175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.763359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.763385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.763561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.763589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.763770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.763797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.763987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.764013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.764195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.764223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.764396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.764425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.764587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.764613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.764752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.764795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.764978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.765006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 
00:24:47.946 [2024-07-15 17:47:42.765194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.765218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.765379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.765409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.765562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.765590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.765801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.765827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.765995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.766025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.766210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.946 [2024-07-15 17:47:42.766239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.946 qpair failed and we were unable to recover it. 00:24:47.946 [2024-07-15 17:47:42.766428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.766453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.766594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.766619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.766779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.766820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.766989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.767015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 
00:24:47.947 [2024-07-15 17:47:42.767202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.767226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.767385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.767414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.767587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.767612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.767799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.767826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.768033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.768061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.768225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.768250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.768405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.768451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.768606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.768634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.768866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.768898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.769089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.769116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 
00:24:47.947 [2024-07-15 17:47:42.769298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.769326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.769483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.769509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.769686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.769714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.769894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.769922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.770084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.770110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.770247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.770273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.770462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.770487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.770648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.770672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.770838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.770863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.771045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.771070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 
00:24:47.947 [2024-07-15 17:47:42.771210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.771235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.771419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.771447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.771625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.947 [2024-07-15 17:47:42.771654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.947 qpair failed and we were unable to recover it. 00:24:47.947 [2024-07-15 17:47:42.771817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.771842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.772016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.772042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.772179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.772222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.772439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.772464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.772676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.772703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.772930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.772955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.773120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.773146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 
00:24:47.948 [2024-07-15 17:47:42.773339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.773366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.773546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.773574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.773731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.773756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.773976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.774006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.774155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.774190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.774350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.774377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.774526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.774569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.774751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.774780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.774936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.774963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.775094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.775138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 
00:24:47.948 [2024-07-15 17:47:42.775336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.775365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.775556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.775581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.775778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.775808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.775965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.775995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.776158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.776183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.776393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.776422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.776593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.776621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.776832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.776858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.777054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.777091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.948 qpair failed and we were unable to recover it. 00:24:47.948 [2024-07-15 17:47:42.777321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.948 [2024-07-15 17:47:42.777346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 
00:24:47.949 [2024-07-15 17:47:42.777627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.777678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.777862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.777900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.778087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.778112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.778254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.778280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.778488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.778517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.778667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.778696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.778915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.778941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.779109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.779138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.779308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.779335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.779495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.779520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 
00:24:47.949 [2024-07-15 17:47:42.779668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.779694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.779899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.779929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.780085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.780112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.780337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.780366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.780541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.780570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.780732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.780757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.780944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.780973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.781154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.781196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.781379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.781404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.781597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.781626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 
00:24:47.949 [2024-07-15 17:47:42.781828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.781857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.782051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.782076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.782232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.782268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.782474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.782503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.782693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.782734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.782925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.782965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.783154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.783182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.949 [2024-07-15 17:47:42.783366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.949 [2024-07-15 17:47:42.783392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.949 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.783579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.783612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.783814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.783842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 
00:24:47.950 [2024-07-15 17:47:42.783999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.784023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.784201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.784229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.784382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.784409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.784611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.784638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.784796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.784823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.785061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.785089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.785249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.785274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.785488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.785515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.785686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.785716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.785945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.785971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 
00:24:47.950 [2024-07-15 17:47:42.786124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.786153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.786306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.786345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.786545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.786569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.786703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.786745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.786953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.786996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.787198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.787226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.787373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.787400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.787573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.787599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.787742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.787767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.787957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.787986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 
00:24:47.950 [2024-07-15 17:47:42.788199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.788251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.788448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.788474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.788638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.788666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.788864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.788898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.789055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.789082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.950 [2024-07-15 17:47:42.789268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.950 [2024-07-15 17:47:42.789298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.950 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.789590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.789641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.789826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.789852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.790025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.790051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.790190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.790216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 
00:24:47.951 [2024-07-15 17:47:42.790379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.790404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.790588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.790616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.790792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.790821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.790981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.791008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.791137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.791167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.791324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.791351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.791505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.791530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.791668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.791710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.791865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.791903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.792094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.792119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 
00:24:47.951 [2024-07-15 17:47:42.792304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.792331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.792604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.792659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.792827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.792853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.793000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.793027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.793240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.793268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.793442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.793468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.793653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.793681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.793854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.793888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.794052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.794077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.794210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.794253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 
00:24:47.951 [2024-07-15 17:47:42.794443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.794494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.794678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.794706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.794861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.794900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.951 [2024-07-15 17:47:42.795081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.951 [2024-07-15 17:47:42.795106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.951 qpair failed and we were unable to recover it. 00:24:47.952 [2024-07-15 17:47:42.795268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.952 [2024-07-15 17:47:42.795294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.952 qpair failed and we were unable to recover it. 00:24:47.952 [2024-07-15 17:47:42.795450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.952 [2024-07-15 17:47:42.795480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.952 qpair failed and we were unable to recover it. 00:24:47.952 [2024-07-15 17:47:42.795666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.952 [2024-07-15 17:47:42.795692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.952 qpair failed and we were unable to recover it. 00:24:47.952 [2024-07-15 17:47:42.795832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.952 [2024-07-15 17:47:42.795858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.952 qpair failed and we were unable to recover it. 00:24:47.952 [2024-07-15 17:47:42.796046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.952 [2024-07-15 17:47:42.796075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.952 qpair failed and we were unable to recover it. 00:24:47.952 [2024-07-15 17:47:42.796266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.952 [2024-07-15 17:47:42.796291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.952 qpair failed and we were unable to recover it. 
00:24:47.952 [2024-07-15 17:47:42.796426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.952 [2024-07-15 17:47:42.796453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420
00:24:47.952 qpair failed and we were unable to recover it.
00:24:47.953 [2024-07-15 17:47:42.799944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.953 [2024-07-15 17:47:42.799978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:47.953 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats continuously from 17:47:42.796 through 17:47:42.839: posix_sock_create reports connect() failed with errno = 111 against 10.0.0.2:4420, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair 0x7effb8000b90 or 0x7effc8000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:47.960 [2024-07-15 17:47:42.839959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.839985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.840148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.840190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.840402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.840428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.840619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.840645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.840832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.840860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.841035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.841061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.841198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.841224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.841392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.841417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.841558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.841584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.841719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.841744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 
00:24:47.960 [2024-07-15 17:47:42.841954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.841983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.842163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.842191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.842394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.842420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.842604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.842632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.842817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.842844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.843015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.843042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.843229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.843255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.843470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.960 [2024-07-15 17:47:42.843498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.960 qpair failed and we were unable to recover it. 00:24:47.960 [2024-07-15 17:47:42.843731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.843757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.843943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.843970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 
00:24:47.961 [2024-07-15 17:47:42.844111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.844136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.844350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.844375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.844556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.844584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.844786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.844814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.845001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.845027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.845181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.845209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.845390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.845418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.845598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.845627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.845800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.845828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.846022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.846049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 
00:24:47.961 [2024-07-15 17:47:42.846188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.846213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.846402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.846435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.846591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.846620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.846780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.846820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.846974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.847000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.847153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.847179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.847365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.847390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.847516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.847542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.847673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.847698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.847832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.847858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 
00:24:47.961 [2024-07-15 17:47:42.848058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.848084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.848281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.848309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.848517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.848542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.848721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.848749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.848900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.848949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.961 [2024-07-15 17:47:42.849142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.961 [2024-07-15 17:47:42.849168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.961 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.849353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.849381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.849533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.849561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.849717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.849742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.849928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.849957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 
00:24:47.962 [2024-07-15 17:47:42.850135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.850164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.850326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.850353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.850541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.850570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.850747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.850776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.850961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.850988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.851241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.851269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.851460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.851486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.851672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.851698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.851925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.851954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.852159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.852189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 
00:24:47.962 [2024-07-15 17:47:42.852372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.852397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.852560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.852604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.852750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.852779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.852936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.852963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.853147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.853175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.853378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.853406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.853586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.853611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.853782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.853810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.854001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.854030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.854194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.854221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 
00:24:47.962 [2024-07-15 17:47:42.854412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.854438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.854605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.854638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.854825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.854851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.962 [2024-07-15 17:47:42.855004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.962 [2024-07-15 17:47:42.855030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.962 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.855196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.855221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.855368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.855393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.855608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.855636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.855781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.855810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.855995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.856021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.856208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.856236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 
00:24:47.963 [2024-07-15 17:47:42.856386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.856414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.856628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.856653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.856857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.856894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.857078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.857108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.857291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.857317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.857465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.857491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.857702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.857731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.857922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.857952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.858131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.858159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.858310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.858339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 
00:24:47.963 [2024-07-15 17:47:42.858542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.858571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.858750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.858776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.858944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.858989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.859164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.859193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.859400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.859428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.859612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.859638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.859820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.859849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.859998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.860027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.860240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.860268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.860458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.860483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 
00:24:47.963 [2024-07-15 17:47:42.860680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.963 [2024-07-15 17:47:42.860709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.963 qpair failed and we were unable to recover it. 00:24:47.963 [2024-07-15 17:47:42.860894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.860923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.861112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.861137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.861328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.861356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.861515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.861545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.861729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.861754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.861968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.862000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.862172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.862200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.862365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.862392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.862560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.862586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 
00:24:47.964 [2024-07-15 17:47:42.862748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.862774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.862961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.862991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.863154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.863182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.863370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.863398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.863611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.863637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.863797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.863825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.864046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.864075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.864271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.864296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.864506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.864533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.864713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.864748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 
00:24:47.964 [2024-07-15 17:47:42.864912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.864939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.865075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.865101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.865249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.865275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.865468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.865494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.865678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.865708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.865899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.865926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.866089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.866114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.866294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.866323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.964 [2024-07-15 17:47:42.866496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.964 [2024-07-15 17:47:42.866525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.964 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.866708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.866733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 
00:24:47.965 [2024-07-15 17:47:42.866900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.866926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.867105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.867130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.867322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.867347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.867535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.867565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.867712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.867740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.867902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.867928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.868141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.868169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.868362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.868389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.868572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.868597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.868771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.868799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 
00:24:47.965 [2024-07-15 17:47:42.868945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.868973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.869156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.869182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.869324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.869367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.869546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.869575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.869724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.869749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.869886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.869927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.870145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.870173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.870350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.870376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.870573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.870601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 00:24:47.965 [2024-07-15 17:47:42.870785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.870810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.965 qpair failed and we were unable to recover it. 
00:24:47.965 [2024-07-15 17:47:42.871008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.965 [2024-07-15 17:47:42.871042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.871259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.871291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.871472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.871497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.871664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.871689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.871827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.871854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.872032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.872075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.872252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.872279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.872417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.872442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.872604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.872629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.872768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.872792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 
00:24:47.966 [2024-07-15 17:47:42.872979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.873014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.873218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.873266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.873418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.873443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.873578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.873619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.873777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.873807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.873973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.874001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.874143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.874169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.874356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.874381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.874596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.874621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.874801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.874828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 
00:24:47.966 [2024-07-15 17:47:42.875017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.875044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.875180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.875205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.875392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.875417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.875639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.875686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.875852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.875884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.876084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.876109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.876380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.876430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.966 [2024-07-15 17:47:42.876626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.966 [2024-07-15 17:47:42.876651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.966 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.876840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.876868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.877074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.877099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 
00:24:47.967 [2024-07-15 17:47:42.877279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.877304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.877483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.877511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.877753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.877800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.877990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.878016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.878228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.878256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.878458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.878503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.878691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.878716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.878954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.878979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.879167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.879207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.879363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.879387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 
00:24:47.967 [2024-07-15 17:47:42.879609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.879636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.879811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.879844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.880017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.880043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.880209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.880234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.880463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.880511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.880698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.880723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.880874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.880907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.881047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.881075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.881251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.881276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.881443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.881467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 
00:24:47.967 [2024-07-15 17:47:42.881654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.881702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.881905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.881941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.882123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.882151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.882358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.882405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.882594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.882618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.967 qpair failed and we were unable to recover it. 00:24:47.967 [2024-07-15 17:47:42.882761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.967 [2024-07-15 17:47:42.882786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.882945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.882973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.883137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.883161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.883367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.883394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.883576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.883623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 
00:24:47.968 [2024-07-15 17:47:42.883809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.883833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.883997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.884023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.884203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.884230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.884380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.884404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.884584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.884612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.884790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.884817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.885032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.885057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.885202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.885229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.885432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.885497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.885698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.885726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 
00:24:47.968 [2024-07-15 17:47:42.885860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.885910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.886124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.886152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.886313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.886339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.886476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.886502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.886750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.886797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.886974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.887001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.887187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.887217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.887434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.887485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.887669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.887695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.887885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.887914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 
00:24:47.968 [2024-07-15 17:47:42.888102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.888131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.888342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.888372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.888558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.968 [2024-07-15 17:47:42.888586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.968 qpair failed and we were unable to recover it. 00:24:47.968 [2024-07-15 17:47:42.888763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.888794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.888954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.888980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.889149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.889174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.889332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.889357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.889546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.889571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.889756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.889784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.889951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.889995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 
00:24:47.969 [2024-07-15 17:47:42.890191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.890218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.890435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.890463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.890682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.890732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.890943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.890971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.891136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.891179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.891432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.891480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.891645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.891670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.891835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.891861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.892050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.892077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.892220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.892246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 
00:24:47.969 [2024-07-15 17:47:42.892426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.892455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.892668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.892716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.892932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.892957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.893144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.893171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.893422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.893470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.893626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.893650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.893818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.893860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.894072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.894100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.894292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.894317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.894501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.894529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 
00:24:47.969 [2024-07-15 17:47:42.894701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.969 [2024-07-15 17:47:42.894752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.969 qpair failed and we were unable to recover it. 00:24:47.969 [2024-07-15 17:47:42.894949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.894974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.895109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.895134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.895299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.895324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.895511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.895535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.895731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.895758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.895911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.895938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.896132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.896157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.896311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.896352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.896552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.896599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 
00:24:47.970 [2024-07-15 17:47:42.896806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.896831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.897000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.897030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.897212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.897239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.897391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.897415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.897596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.897623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.897810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.897834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.898017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.898042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.898203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.898230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.898432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.898477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.898660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.898684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 
00:24:47.970 [2024-07-15 17:47:42.898827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.898854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.899038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.899065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.899218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.899243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.899416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.899443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.899682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.899728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.899925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.899951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.900147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.900174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.900324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.900350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.970 qpair failed and we were unable to recover it. 00:24:47.970 [2024-07-15 17:47:42.900567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.970 [2024-07-15 17:47:42.900591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.900802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.900829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 
00:24:47.971 [2024-07-15 17:47:42.901001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.901030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.901185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.901210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.901388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.901415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.901585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.901630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.901839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.901864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.902016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.902043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.902207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.902231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.902386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.902410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.902574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.902599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.902792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.902819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 
00:24:47.971 [2024-07-15 17:47:42.902975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.903009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.903184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.903211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.903415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.903465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.903651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.903675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.903830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.903857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.904062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.904090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.904251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.904276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.904484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.904511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.904702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.904731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 00:24:47.971 [2024-07-15 17:47:42.904896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.971 [2024-07-15 17:47:42.904928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.971 qpair failed and we were unable to recover it. 
00:24:47.971 [... the same error pair — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 — repeats for every retry from 17:47:42.905 through 17:47:42.942 (log timestamps 00:24:47.971 through 00:24:47.979), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:24:47.979 [2024-07-15 17:47:42.942957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.942983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.943147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.943172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.943341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.943366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.943502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.943527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.943715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.943740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.943873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.943901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.944041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.944065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.944208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.944233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.944373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.944397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.944558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.944583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 
00:24:47.979 [2024-07-15 17:47:42.944733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.944758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.944921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.944946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.945113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.945137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.945273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.945298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.945460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.945485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.945628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.945652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.945790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.945815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.945976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.946001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.946167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.946190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 00:24:47.979 [2024-07-15 17:47:42.946358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.979 [2024-07-15 17:47:42.946383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.979 qpair failed and we were unable to recover it. 
00:24:47.979 [2024-07-15 17:47:42.946539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.946564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.946696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.946720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.946885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.946911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.947082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.947107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.947295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.947320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.947463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.947487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.947622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.947645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.947809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.947833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.947997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.948026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.948172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.948198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 
00:24:47.980 [2024-07-15 17:47:42.948363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.948389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.948571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.948595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.948735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.948760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.948932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.948958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.949127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.949153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.949288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.949312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.949471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.949495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.949634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.949659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.949800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.949824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.949989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.950014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 
00:24:47.980 [2024-07-15 17:47:42.950182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.950207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.950394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.950419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.950554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.950578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.950743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.950767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.950903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.950928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.951066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.951090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.951250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.951274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.951404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.951429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.951612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.951635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 00:24:47.980 [2024-07-15 17:47:42.951798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.980 [2024-07-15 17:47:42.951821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.980 qpair failed and we were unable to recover it. 
00:24:47.980 [2024-07-15 17:47:42.952013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.952038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.952179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.952204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.952370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.952394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.952593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.952617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.952760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.952784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.952957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.952993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.953137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.953161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.953352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.953377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.953538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.953563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.953728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.953753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 
00:24:47.981 [2024-07-15 17:47:42.953917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.953941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.954130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.954154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.954329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.954354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.954491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.954516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.954678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.954704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.954866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.954896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.955060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.955084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.955275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.955299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.955464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.955492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.955650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.955674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 
00:24:47.981 [2024-07-15 17:47:42.955833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.955858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.956042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.956067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.956209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.956233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.956396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.956422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.956586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.956611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.956749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.956774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.956917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.956941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.957104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.957128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.957286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.957311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.957474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.957499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 
00:24:47.981 [2024-07-15 17:47:42.957663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.981 [2024-07-15 17:47:42.957687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.981 qpair failed and we were unable to recover it. 00:24:47.981 [2024-07-15 17:47:42.957874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.957904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.958079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.958104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.958266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.958291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.958452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.958477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.958643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.958667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.958832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.958856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.959015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.959040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.959205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.959228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.959396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.959421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 
00:24:47.982 [2024-07-15 17:47:42.959586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.959611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.959756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.959780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.959942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.959967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.960137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.960161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.960324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.960348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.960540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.960564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.960705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.960730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.960865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.960895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.961063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.961087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.961225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.961249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 
00:24:47.982 [2024-07-15 17:47:42.961404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.961429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.961569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.961594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.961754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.961778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.961920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.961945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.962109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.962134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.962273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.962297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.962482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.962507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.962669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.962694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.962831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.962859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.963001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.963026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 
00:24:47.982 [2024-07-15 17:47:42.963189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.963214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.963369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.963393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.963530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.963554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.982 qpair failed and we were unable to recover it. 00:24:47.982 [2024-07-15 17:47:42.963718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.982 [2024-07-15 17:47:42.963742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.963932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.963957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.964123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.964147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.964306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.964330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.964497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.964521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.964678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.964703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.964893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.964917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 
00:24:47.983 [2024-07-15 17:47:42.965083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.965110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.965274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.965298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.965437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.965461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.965618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.965642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.965783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.965808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.965943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.965969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.966109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.966134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.966298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.966321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.966479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.966504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.966637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.966661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 
00:24:47.983 [2024-07-15 17:47:42.966794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.966818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.966991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.967017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.967175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.967200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.967363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.967388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.967546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.967571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.967741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.967765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.967937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.967962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.968125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.968150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.968307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.983 [2024-07-15 17:47:42.968331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.983 qpair failed and we were unable to recover it. 00:24:47.983 [2024-07-15 17:47:42.968497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.968522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 
00:24:47.984 [2024-07-15 17:47:42.968679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.968704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.968869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.968899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.969070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.969094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.969259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.969283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.969428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.969453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.969594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.969619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.969786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.969982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.970007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.970163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.970192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.970331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.970356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 
00:24:47.984 [2024-07-15 17:47:42.970503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.970527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.970659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.970683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.970843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.970867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.971011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.971036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.971197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.971222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.971387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.971412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.971547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.971571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.971717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.971742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.971916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.971941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.972083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.972109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 
00:24:47.984 [2024-07-15 17:47:42.972247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.972271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.972434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.972460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.972653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.972678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.972838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.972863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.973013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.973038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.973182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.973205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.973362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.973387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.973519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.973545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.973704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.973728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.973914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.973940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 
00:24:47.984 [2024-07-15 17:47:42.974105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.974129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.974263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.974287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.974453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.974477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.974666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.974690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.974829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.984 [2024-07-15 17:47:42.974855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.984 qpair failed and we were unable to recover it. 00:24:47.984 [2024-07-15 17:47:42.974996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.975022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.975189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.975213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.975372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.975398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.975560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.975585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.975723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.975747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 
00:24:47.985 [2024-07-15 17:47:42.975873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.975903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.976092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.976117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.976251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.976275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.976415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.976441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.976626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.976650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.976797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.976822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.976983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.977009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.977173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.977198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.977335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.977365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.977533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.977557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 
00:24:47.985 [2024-07-15 17:47:42.977720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.977744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.977941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.977965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.978106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.978131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.978297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.978322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.978488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.978512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.978673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.978697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.978885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.978911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.979049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.979073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.979208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.979232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.979393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.979418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 
00:24:47.985 [2024-07-15 17:47:42.979579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.979603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.979765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.979789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.979959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.979983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.980151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.980176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.980332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.980357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.980526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.980550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.980713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.980737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.980882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.980907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.981044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.981068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.981256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.981281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 
00:24:47.985 [2024-07-15 17:47:42.981448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.981473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.981613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.981638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.981782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.981806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.981971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.981997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.982159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.982183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.982347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.982372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.982510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.982535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.982680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.982705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.982852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.982881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.983049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.983075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 
00:24:47.985 [2024-07-15 17:47:42.983239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.983264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.983424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.983449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.983638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.983662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.983826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.983850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.983993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.984019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.985 qpair failed and we were unable to recover it. 00:24:47.985 [2024-07-15 17:47:42.984158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.985 [2024-07-15 17:47:42.984182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.984340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.984364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.984527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.984552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.984737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.984766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.984910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.984934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 
00:24:47.986 [2024-07-15 17:47:42.985096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.985120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.985280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.985304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.985443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.985467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.985634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.985657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.985843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.985868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.986018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.986043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.986185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.986210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.986371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.986397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.986541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.986566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.986733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.986757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 
00:24:47.986 [2024-07-15 17:47:42.986898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.986923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.987059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.987083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.987281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.987306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.987490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.987514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.987705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.987730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.987892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.987918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.988045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.988070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.988235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.988259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.988448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.988473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.988637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.988662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 
00:24:47.986 [2024-07-15 17:47:42.988803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.988827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.988966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.988991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.989155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.989180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.989333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.989358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.989494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.989518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.989711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.989736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.989906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.989932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.990074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.990097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.990241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.990265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.990458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.990482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 
00:24:47.986 [2024-07-15 17:47:42.990645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.990670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.990835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.990860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.991032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.991056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.991226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.991251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.991414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.991439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.991602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.991628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.991775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.991801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.991972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.991998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.992161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.992189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.992329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.992353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 
00:24:47.986 [2024-07-15 17:47:42.992492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.992518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.992679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.992703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.992871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.992902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.993043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.993067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.993257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.993281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.993440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.993464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.986 qpair failed and we were unable to recover it. 00:24:47.986 [2024-07-15 17:47:42.993605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.986 [2024-07-15 17:47:42.993630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.993814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.993839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.994008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.994032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.994199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.994224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 
00:24:47.987 [2024-07-15 17:47:42.994364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.994388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.994547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.994571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.994736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.994761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.994890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.994916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.995077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.995101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.995237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.995261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.995449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.995474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.995610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.995634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.995824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.995849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.995996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.996022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 
00:24:47.987 [2024-07-15 17:47:42.996215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.996241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.996403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.996427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.996621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.996645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.996808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.996833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.997032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.997057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.997195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.997219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.997383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.997407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.997542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.997568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.997726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.997749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.997909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.997934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 
00:24:47.987 [2024-07-15 17:47:42.998103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.998127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.998296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.998320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.998483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.998506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.998676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.998702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.998897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.998923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.999084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.999109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.999248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.999273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.999437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.999462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.999608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.999638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 00:24:47.987 [2024-07-15 17:47:42.999782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.987 [2024-07-15 17:47:42.999809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.987 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt between 17:47:43.000 and 17:47:43.035 ...]
00:24:47.991 [2024-07-15 17:47:43.035639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.035665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.035849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.035874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.036019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.036044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.036188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.036215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.036405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.036430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.036560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.036586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.036712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.036737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.036893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.036920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.037087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.037111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.037258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.037284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 
00:24:47.991 [2024-07-15 17:47:43.037473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.037497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.037626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.037651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.037787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.037819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.037972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.037998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.038165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.038190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.038347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.038371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.038510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.038535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.038724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.038749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.991 [2024-07-15 17:47:43.038886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.991 [2024-07-15 17:47:43.038911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.991 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.039073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.039098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 
00:24:47.992 [2024-07-15 17:47:43.039239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.039265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.039428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.039453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.039618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.039642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.039809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.039834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.039979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.040012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.040192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.040218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.040387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.040412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.040571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.040595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.040732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.040756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.040898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.040925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 
00:24:47.992 [2024-07-15 17:47:43.041077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.041103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.041268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.041293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.041435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.041459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.041650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.041685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.041822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.041847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.041989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.042021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.042178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.042202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.042341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.042367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.042536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.042561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.042731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.042758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 
00:24:47.992 [2024-07-15 17:47:43.042910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.042937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.043107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.043134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.043292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.043324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.043493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.043521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.043680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.043712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.043902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.043928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.044099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.044125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.044263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.044290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.044457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.044482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.044651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.044679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 
00:24:47.992 [2024-07-15 17:47:43.044818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.044843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.045018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.045044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.045191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.045221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.045362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.045387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.045568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.045594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.045742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.045768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.045961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.045988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.046131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.046161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.046288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.046312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.046449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.046474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 
00:24:47.992 [2024-07-15 17:47:43.046612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.046636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.046804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.046829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.046985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.047021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.992 qpair failed and we were unable to recover it. 00:24:47.992 [2024-07-15 17:47:43.047177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.992 [2024-07-15 17:47:43.047204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.047334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.047358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.047550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.047579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.047747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.047772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.047956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.047983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.048124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.048149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.048284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.048312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 
00:24:47.993 [2024-07-15 17:47:43.048463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.048489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.048676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.048701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.048838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.048862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.049018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.049044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.049207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.049244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.049385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.049414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.049577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.049603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.049738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.049763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.049897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.049924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.050110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.050145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 
00:24:47.993 [2024-07-15 17:47:43.050307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.050333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.050493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.050524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.050709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.050734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.050929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.050954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.051131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.051158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.051319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.051345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.051489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.051513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.051652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.051676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:47.993 [2024-07-15 17:47:43.051811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.993 [2024-07-15 17:47:43.051846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:47.993 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.052054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.052082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 
00:24:48.283 [2024-07-15 17:47:43.052251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.052278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.052423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.052453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.052629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.052655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.052795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.052820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.052969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.053005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.053169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.053195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.053339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.053365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.053531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.053557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.053788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.053815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.053965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.053991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 
00:24:48.283 [2024-07-15 17:47:43.054136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.054162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.054295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.054322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.054497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.054524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.054692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.054717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.054854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.054890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.055046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.055082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.055232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.055257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.055391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.055415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.055588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.055614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.055770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.055797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 
00:24:48.283 [2024-07-15 17:47:43.055938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.055963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.056103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.056128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.056299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.056325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.056490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.056516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.056703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.056728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.056898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.056947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.057148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.057187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.057329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.057357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.057507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.057532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.057679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.057705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 
00:24:48.283 [2024-07-15 17:47:43.057863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.057896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.058034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.058060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.058194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.058220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.058401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.058426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.058584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.058610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.058753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.058780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.058948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.058974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.059146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.059171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.059311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.059336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.059507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.059533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 
00:24:48.283 [2024-07-15 17:47:43.059688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.059714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.059849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.059874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.060087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.060113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.060281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.060308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.060469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.060494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.060631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.060657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.060825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.060850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.061036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.061062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.061208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.061247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 00:24:48.283 [2024-07-15 17:47:43.061401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.283 [2024-07-15 17:47:43.061431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.283 qpair failed and we were unable to recover it. 
00:24:48.283 [2024-07-15 17:47:43.061842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.283 [2024-07-15 17:47:43.061871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420
00:24:48.283 qpair failed and we were unable to recover it.
00:24:48.283 [2024-07-15 17:47:43.062031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.283 [2024-07-15 17:47:43.062057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420
00:24:48.283 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7effb8000b90, addr=10.0.0.2, port=4420 on every retry from 2024-07-15 17:47:43.062203 through 17:47:43.098971 ...]
00:24:48.286 [2024-07-15 17:47:43.098593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.098620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.098770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.098797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.098944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.098971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.099124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.099162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.099336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.099363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.099718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.099746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.099914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.099941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.100074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.100099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.100261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.100286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.100432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.100459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 
00:24:48.286 [2024-07-15 17:47:43.100603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.100630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.100820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.100845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.100992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.101018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.101183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.101208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.101371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.101397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.101662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.101693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.101907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.101933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.102076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.102101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.102293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.102318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.102454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.102479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 
00:24:48.286 [2024-07-15 17:47:43.102641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.102665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.102850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.102880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.103052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.103077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.103213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.103238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.103399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.103424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.103585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.103610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.103796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.103821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.103964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.103990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.104178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.104205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.104354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.104381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 
00:24:48.286 [2024-07-15 17:47:43.104577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.104603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.104744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.104770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.104916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.104942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.105103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.105128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.105290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.105315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.105476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.105500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.105640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.105675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.105820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.105845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.106014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.106039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.106177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.106209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 
00:24:48.286 [2024-07-15 17:47:43.106374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.106399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.106642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.106667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.106710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c600e0 (9): Bad file descriptor 00:24:48.286 [2024-07-15 17:47:43.106971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.107011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.107169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.107208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.107348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.107376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.107567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.107593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.107786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.107811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.107978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.108004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.108138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.108164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 
00:24:48.286 [2024-07-15 17:47:43.108330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.108357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.108496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.108522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.286 [2024-07-15 17:47:43.108687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.286 [2024-07-15 17:47:43.108712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.286 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.108905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.108932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.109077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.109104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.109293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.109319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.109462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.109489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.109632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.109657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.109786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.109811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.109975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.110014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.110161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.110188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.110380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.110406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.110566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.110591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.110730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.110755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.110930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.110957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.111131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.111158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.111300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.111326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.111513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.111539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.111729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.111755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.111924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.111955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.112099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.112124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.112264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.112289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.112431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.112458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.112598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.112624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.112789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.112815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.112961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.112988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.113121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.113147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.113320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.113345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.113484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.113510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.113675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.113702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.113839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.113866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.114038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.114078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.114233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.114271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.114422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.114449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.114642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.114667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.114814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.114841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.114983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.115008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.115169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.115194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.115358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.115383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.115574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.115599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.115767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.115794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.115937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.115964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.116126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.116151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.116337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.116362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.116527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.116553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.116685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.116712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.116886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.116913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.117081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.117106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.117271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.117295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.117467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.117492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.117637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.117664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.117828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.117853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.117995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.118020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.118191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.118216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.118379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.118404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.118540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.118566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.118730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.118755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.118904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.118944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.119110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.119149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.119324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.119356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.119521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.119547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.119680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.119705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.119872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.119906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.120046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.120071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.120246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.120271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.120473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.120512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.120692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.120718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.120887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.120913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.121081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.121106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.121272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.121297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.121464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.121489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.121650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.121675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.121841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.121869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.122062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.122100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.122273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.122300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.122429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.122455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.122617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.122643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.122785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.122810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.122982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.123009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.287 [2024-07-15 17:47:43.123181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.123207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 
00:24:48.287 [2024-07-15 17:47:43.123369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.287 [2024-07-15 17:47:43.123395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.287 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.123566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.123591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.123787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.123815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.123962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.123988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.124152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.124177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.124340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.124364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.124523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.124552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.124686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.124711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.124847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.124872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.125039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.125063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.125199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.125223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.125387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.125411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.125556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.125580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.125717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.125742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.125892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.125916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.126078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.126105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.126273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.126297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.126485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.126509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.126644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.126670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.126811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.126835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.127012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.127038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.127178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.127202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.127367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.127393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.127559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.127584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.127749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.127773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.127937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.127961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.128128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.128153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.128346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.128371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.128507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.128532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.128690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.128714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.128854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.128884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.129022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.129047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.129261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.129286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.129454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.129478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.129644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.129667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.129800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.129826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.129987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.130026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.130166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.130193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.130328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.130356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.130515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.130541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.130673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.130698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.130837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.130863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.131035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.131061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.131223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.131248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.131420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.131445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.131584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.131610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.131751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.131783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.131954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.131981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.132147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.132173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.132321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.132346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.132513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.132538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.132711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.132736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.132911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.132949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.133125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.133151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.133314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.133340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.133471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.133496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.133639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.133664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.133824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.133849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.134021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.134046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.134212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.134237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.134404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.134429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.134582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.134607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.134793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.134818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.134995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.135020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.135188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.135213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.135341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.135366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.135505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.135530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.135691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.135716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.135889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.135929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.136106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.136133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.136270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.136295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.136438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.136466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.136727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.136765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.136921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.136953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.137120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.137145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.137306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.137330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.137494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.137520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.137667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.137692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.137859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.137891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.138057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.138082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.138213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.138238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.138403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.138427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.138558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.138582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.138744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.138768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.138946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.138986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.139190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.139228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.139436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.139474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.139654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.139681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.139846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.139870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.140025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.140051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 
00:24:48.288 [2024-07-15 17:47:43.140241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.140266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.140426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.288 [2024-07-15 17:47:43.140451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.288 qpair failed and we were unable to recover it. 00:24:48.288 [2024-07-15 17:47:43.140599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.140624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.140794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.140820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.140989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.141017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.141183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.141207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.141343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.141367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.141531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.141556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.141693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.141718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.141886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.141910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.142056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.142081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.142243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.142268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.142431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.142454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.142588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.142615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.142774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.142799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.142938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.142963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.143108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.143133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.143290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.143315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.143503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.143527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.143681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.143705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.143843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.143868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.144028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.144053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.144220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.144244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.144406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.144438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.144577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.144603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.144770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.144794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.144933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.144958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.145114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.145139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.145302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.145328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.145459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.145483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.145647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.145671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.145831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.145857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.146020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.146044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.146293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.146318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.146484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.146508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.146647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.146671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.146807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.146831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.146997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.147024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.147162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.147187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.147329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.147354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.147516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.147540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.147702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.147725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.147863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.147893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.148062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.148087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.148218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.148244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.148408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.148432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.148597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.148622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.148809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.148833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.148999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.149024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.149182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.149206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.149349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.149374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.149529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.149554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.149696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.149721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.149894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.149919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.150061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.150087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.150226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.150250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.150416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.150441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.150599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.150623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.150806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.150830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.150973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.150997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.151163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.151187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.151325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.151350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.151515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.151540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.151680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.151710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.151882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.151908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.152075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.152100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.152275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.152299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.152465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.152489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.152627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.152653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.152813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.152839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.153025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.153064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.153239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.153267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.153409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.153434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.153575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.153602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.153740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.153765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.153927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.153953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.154116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.154143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.154309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.154334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.154476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.154502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.154643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.154671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 
00:24:48.289 [2024-07-15 17:47:43.154809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.154836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.154999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.155038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.155219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.155246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.155408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.155434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.155587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.155612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.155780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.155805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.155949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.289 [2024-07-15 17:47:43.155974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.289 qpair failed and we were unable to recover it. 00:24:48.289 [2024-07-15 17:47:43.156150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.156174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.156341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.156365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.156535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.156560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.156693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.156724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.156859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.156889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.157059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.157085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.157258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.157283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.157424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.157449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.157614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.157638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.157804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.157830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.157967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.157993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.158155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.158180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.158341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.158366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.158533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.158558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.158725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.158750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.158891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.158916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.159056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.159080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.159246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.159271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.159433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.159457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.159617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.159642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.159804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.159828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.159991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.160029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.160180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.160206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.160389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.160413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.160571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.160596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.160736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.160761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.160922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.160961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.161097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.161121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.161290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.161315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.161461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.161488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.161654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.161681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.161889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.161927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.162070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.162097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.162269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.162294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.162470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.162495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.162634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.162660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.162825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.162850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.163021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.163047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.163237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.163263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.163403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.163428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.163619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.163644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.163808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.163833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.164005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.164031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.164195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.164220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.164364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.164390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.164560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.164586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.164788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.164813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.164959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.164986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.165155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.165181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.165347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.165374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.165516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.165542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.165711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.165739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.165906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.165932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.166124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.166149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.166313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.166336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.166498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.166522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.166682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.166706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.166852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.166884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.167078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.167104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.167245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.167270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.167406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.167431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.167590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.167615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.167772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.167797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.167955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.167981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.168115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.168142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.168311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.168337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.168504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.168530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.168696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.168721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.168894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.168920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.169060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.169087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.169254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.169284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.169451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.169477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.169648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.169676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.169818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.169843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.169982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.170007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.170149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.170174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.170308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.170333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.170495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.170521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.170684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.170709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.170845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.170870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.171020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.171045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.171177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.171205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.171344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.171370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 
00:24:48.290 [2024-07-15 17:47:43.171532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.171558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.171725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.171750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.171908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.171934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.290 [2024-07-15 17:47:43.172079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.290 [2024-07-15 17:47:43.172106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.290 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.172272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.172298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.172436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.172463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.172628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.172654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.172814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.172839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.173007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.173034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.173175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.173202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 
00:24:48.291 [2024-07-15 17:47:43.173368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.173393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.173553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.173578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.173746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.173771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.173920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.173947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.174131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.174169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.174335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.174362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.174533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.174558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.174695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.174720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.174849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.174874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.175021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.175046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 
00:24:48.291 [2024-07-15 17:47:43.175208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.175233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.175401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.175425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.175583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.175611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.175779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.175804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.175953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.175980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.176146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.176172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.176337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.176363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.176504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.176533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.176673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.176698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.176861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.176892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 
00:24:48.291 [2024-07-15 17:47:43.177051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.177076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.177235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.177260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.177445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.177470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.177625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.177649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.177790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.177818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.178008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.178034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.178197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.178222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.178397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.178422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.178562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.178587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.178752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.178777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 
00:24:48.291 [2024-07-15 17:47:43.178944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.178970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.179138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.179163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.179365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.179389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.179551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.179576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.179737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.179761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.179921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.179946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.180086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.180111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.180275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.180300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.180485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.180510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.180669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.180693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 
00:24:48.291 [2024-07-15 17:47:43.180879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.180904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.181095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.181120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.181281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.181306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.181467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.181491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.181632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.181661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.181797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.181823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.181998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.182024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.182163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.182188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.182348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.182373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.182517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.182543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 
00:24:48.291 [2024-07-15 17:47:43.182709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.182734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.182884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.182909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.183044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.183069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.183231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.183256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.183396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.183421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.183581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.291 [2024-07-15 17:47:43.183606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.291 qpair failed and we were unable to recover it. 00:24:48.291 [2024-07-15 17:47:43.183790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.183815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.183976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.184001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.184169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.184195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.184363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.184389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.184554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.184579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.184766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.184791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.184934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.184960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.185148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.185174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.185343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.185369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.185536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.185561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.185720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.185745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.185906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.185932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.186069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.186094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.186279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.186304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.186489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.186514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.186653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.186682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.186821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.186846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.186985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.187011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.187150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.187175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.187338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.187363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.187551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.187576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.187714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.187739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.187885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.187911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.188099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.188123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.188261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.188285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.188421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.188446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.188632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.188657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.188815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.188840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.188983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.189009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.189193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.189232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.189378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.189405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.189544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.189568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.189704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.189729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.189894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.189920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.190084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.190111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.190253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.190279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.190445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.190470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.190636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.190661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.190806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.190832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.191028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.191054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.191196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.191221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.191410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.191435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.191576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.191606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.191752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.191778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.191928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.191955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.192092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.192117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.192248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.192274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.192470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.192495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.192638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.192663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.192805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.192829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.192964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.192991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.193135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.193160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.193327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.193351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.193514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.193538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.193671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.193696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.193830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.193854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.194002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.194028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.194185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.194209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.194362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.194387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.194553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.194578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.194764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.194789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.194973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.194998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.195131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.195156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.195346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.195371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.195514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.195539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.195704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.195728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.195854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.195884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.196049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.196075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.196206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.196231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.196402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.196431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.196596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.196621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.196781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.196806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.196968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.196993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.197154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.197179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.197341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.197365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.197504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.197528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.197696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.197721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.197886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.197911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.198041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.198066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.198202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.198226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.198388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.198413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.198594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.198619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.198805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.198829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.199010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.199049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 
00:24:48.292 [2024-07-15 17:47:43.199194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.199222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.199389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.199414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.199578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.199604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.199765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.199790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.199971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.200010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.200187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.200214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.200350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.200377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.200541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.200567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.200701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.200728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.292 qpair failed and we were unable to recover it. 00:24:48.292 [2024-07-15 17:47:43.200913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.292 [2024-07-15 17:47:43.200940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.201109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.201135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.201292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.201317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.201490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.201520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.201665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.201690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.201828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.201854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.202035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.202061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.202225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.202252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.202395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.202421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.202565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.202591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.202763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.202790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.202954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.202980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.203126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.203151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.203290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.203316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.203457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.203482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.203642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.203667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.203831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.203855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.204054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.204079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.204241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.204266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.204398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.204424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.204558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.204582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.204744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.204768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.204935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.204962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.205124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.205149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.205299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.205325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.205486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.205512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.205659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.205683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.205851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.205881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.206048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.206073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.206262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.206287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.206455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.206480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.206617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.206643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.206831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.206856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.207008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.207034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.207169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.207194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.207378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.207403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.207536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.207562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.207701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.207725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.207899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.207925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.208068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.208093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.208287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.208312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.208443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.208467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.208609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.208634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.208821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.208865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.209045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.209072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.209236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.209262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.209425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.209451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.209608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.209633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.209779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.209804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.209969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.209995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.210168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.210194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.210341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.210368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.210528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.210555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.210692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.210718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.210910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.210936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.211077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.211102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.211262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.211288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.211435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.211461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.211599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.211627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.211789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.211814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.211977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.212002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.212171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.212196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.212358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.212383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.212573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.212597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.212734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.212759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.212932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.212958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.213098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.213123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.213288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.213315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.213492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.213518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.213707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.213732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.213901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.213928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 
00:24:48.293 [2024-07-15 17:47:43.214093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.214118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.214256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.214281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.214443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.214467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.214659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.214683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.214847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.214872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.293 [2024-07-15 17:47:43.215019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.293 [2024-07-15 17:47:43.215044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.293 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.215209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.215235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.215372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.215397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.215557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.215582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.215719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.215744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.215888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.215913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.216055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.216080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.216244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.216274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.216434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.216459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.216625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.216649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.216839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.216864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.217039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.217064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.217202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.217227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.217358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.217384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.217515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.217541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.217709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.217734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.217896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.217922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.218088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.218114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.218282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.218306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.218499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.218524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.218659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.218684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.218846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.218871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.219007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.219032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.219177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.219201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.219339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.219364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.219525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.219549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.219789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.219813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.219978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.220003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.220154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.220179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.220318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.220343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.220500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.220525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.220687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.220713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.220850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.220880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.221012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.221036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.221229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.221254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.221494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.221519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.221687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.221712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.221871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.221901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.222068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.222093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.222258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.222283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.222447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.222472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.222633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.222658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.222822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.222847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.223012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.223038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.223207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.223232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.223369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.223395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.223523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.223548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.223791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.223820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.224010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.224049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.224199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.224227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.224393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.224420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.224587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.224613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.224776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.224802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.224991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.225017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.225185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.225212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.225382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.225408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.225572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.225597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.225771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.225798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.225968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.225993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.226135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.226160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.226294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.226319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.226483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.226508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.226677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.226702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.226868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.226898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.227035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.227061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.227205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.227231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.227420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.227445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.227609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.227634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.227767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.227792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.227932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.227957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.228096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.228123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.228260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.228285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.228444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.228469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.228625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.228650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.228819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.228844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.228988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.229014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.229173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.229198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.229361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.229386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.229552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.229577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.229708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.229733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.229871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.229901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.230043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.230068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.230231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.230255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.230411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.230436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.230600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.230624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 
00:24:48.294 [2024-07-15 17:47:43.230785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.230811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.230950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.230976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.231117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.231143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.231292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.294 [2024-07-15 17:47:43.231317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.294 qpair failed and we were unable to recover it. 00:24:48.294 [2024-07-15 17:47:43.231558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.231582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.231754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.231780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.231969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.231995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.232151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.232176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.232332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.232357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.232512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.232536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.232776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.232801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.232962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.232987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.233152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.233177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.233338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.233363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.233504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.233531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.233668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.233693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.233861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.233894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.234031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.234056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.234223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.234248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.234410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.234435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.234597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.234622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.234783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.234809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.234982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.235008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.235247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.235272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.235461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.235486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.235617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.235642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.235775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.235800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.235986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.236012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.236175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.236199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.236361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.236390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.236582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.236607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.236745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.236772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.236914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.236939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.237079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.237105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.237242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.237268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.237460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.237485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.237645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.237670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.237805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.237829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.238010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.238049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.238216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.238244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.238408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.238434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.238600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.238627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.238816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.238841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.238995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.239022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.239189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.239216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.239382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.239407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.239543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.239569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.239711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.239739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.239887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.239914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.240060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.240085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.240249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.240274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.240412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.240436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.240568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.240593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.240789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.240813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.240976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.241001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.241164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.241189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.241359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.241384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.241573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.241598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.241790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.241814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.241956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.241981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.242143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.242168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.242327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.242352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.242514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.242539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.242786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.242811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.242980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.243005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.243170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.243196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.243367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.243392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.243555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.243581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.243771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.243796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.243961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.243990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.244130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.244155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.244315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.244341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.244480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.244506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.244641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.244666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.244815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.244854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.245008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.245036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.245185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.245212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.245377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.245403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.245538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.245563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.245699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.245726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 
00:24:48.295 [2024-07-15 17:47:43.245899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.245926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.246062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.246087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.246251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.246275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.246443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.246468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.246629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.246653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.246812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.246837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.247018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.295 [2024-07-15 17:47:43.247057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.295 qpair failed and we were unable to recover it. 00:24:48.295 [2024-07-15 17:47:43.247252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.247279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.247440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.247466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.247633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.247658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.247824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.247851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.248029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.248055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.248217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.248243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.248434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.248460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.248626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.248652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.248793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.248818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.249004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.249031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.249169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.249196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.249448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.249475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.249642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.249668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.249809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.249834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.250007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.250032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.250179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.250204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.250363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.250388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.250519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.250544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.250714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.250739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.250882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.250907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.251066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.251091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.251220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.251246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.251409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.251438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.251575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.251600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.251744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.251769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.251934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.251959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.252096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.252121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.252284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.252310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.252444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.252468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.252628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.252653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.252851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.252880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.253053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.253078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.253220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.253245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.253381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.253406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.253547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.253571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.253753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.253778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.253926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.253952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.254119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.254143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.254313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.254338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.254504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.254529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.254691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.254715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.254866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.254895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.255050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.255074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.255206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.255231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.255395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.255419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.255661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.255687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.255894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.255920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.256059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.256086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.256219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.256244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.256383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.256409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.256574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.256599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.256737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.256762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.256900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.256926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.257099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.257124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.257364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.257389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.257571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.257596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.257768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.257793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.257967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.257993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.258160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.258185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.258376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.258401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.258564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.258589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.258726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.258750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.258887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.258929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.259115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.259141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.259302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.259328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.259464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.259489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.259674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.259699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.259837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.259863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.260039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.260065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.260204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.260229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.260396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.260421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.260560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.260586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.260721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.260745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.260896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.260922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.261057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.261083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.261247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.261273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.261443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.261468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.261609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.261635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.261805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.261830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.261991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.262016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.262155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.262180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.262368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.262393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.262526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.262550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 
00:24:48.296 [2024-07-15 17:47:43.262715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.262740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.262932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.262958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.263145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.263170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.263298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.263323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.263511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.263535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.263681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.263706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.296 [2024-07-15 17:47:43.263871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.296 [2024-07-15 17:47:43.263901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.296 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.264061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.264086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.264278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.264302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.264444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.264470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.264661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.264685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.264823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.264849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.264994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.265020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.265184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.265208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.265399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.265423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.265565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.265589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.265779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.265803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.265942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.265967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.266132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.266156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.266319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.266348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.266487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.266512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.266681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.266705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.266865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.266894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.267084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.267109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.267267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.267291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.267452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.267478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.267608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.267634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.267823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.267848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.267988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.268013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.268150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.268176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.268318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.268343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.268531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.268555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.268713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.268737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.268899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.268924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.269086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.269111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.269275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.269300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.269487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.269512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.269671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.269696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.269861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.269891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.270058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.270083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.270248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.270273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.270462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.270487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.270624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.270649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.270791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.270816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.271059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.271084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.271221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.271245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.271417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.271442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.271608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.271633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.271820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.271845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.271992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.272018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.272181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.272206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.272344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.272369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.272503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.272528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.272692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.272716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.272958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.272984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.273143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.273168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.273335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.273360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.273514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.273538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.273677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.273702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.273837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.273867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.274035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.274060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.274194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.274222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.274387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.274412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.274574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.274601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.274767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.274792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.274961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.274989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.275133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.275158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.275319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.275343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.275508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.275534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.275697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.275722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.275860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.275890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.276077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.276102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.276288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.276315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.276480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.276504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.276678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.276702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.276868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.276906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.277070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.277095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.277263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.277288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.277417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.277442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.277602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.277627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.277815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.277840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.278013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.278048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.278187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.278211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.278381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.278407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.278591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.278616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.278782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.278807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.278957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.278984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.279152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.279176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.279443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.279467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.279658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.279685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 
00:24:48.297 [2024-07-15 17:47:43.279827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.279852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.280006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.280031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.280174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.280200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.280340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.280364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.297 [2024-07-15 17:47:43.280532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.297 [2024-07-15 17:47:43.280557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.297 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.280691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.280716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.280847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.280872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.281043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.281069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.281259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.281286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.281425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.281454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.281722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.281747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.281919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.281946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.282130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.282155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.282293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.282321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.282481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.282508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.282677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.282702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.282864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.282895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.283067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.283092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.283235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.283261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.283430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.283460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.283636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.283661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.283821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.283846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.284019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.284044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.284213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.284238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.284374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.284399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.284569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.284599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.284745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.284770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.284919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.284945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.285108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.285133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.285323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.285348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.285521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.285559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.285705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.285732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.285917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.285943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.286082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.286107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.286295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.286320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.286489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.286514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.286711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.286737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.286873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.286904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.287079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.287104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.287242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.287266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.287451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.287475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.287646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.287671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.287833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.287858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.288028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.288053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.288190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.288215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.288374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.288399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.288587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.288612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.288741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.288766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.288914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.288940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.289122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.289160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.289325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.289354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.289522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.289548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.289692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.289718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.289892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.289920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.290085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.290111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.290277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.290303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.290438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.290463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.290636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.290664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.291153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.291183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.291331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.291357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.291520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.291546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.291926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.291955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.292122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.292148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.292330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.292369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.292529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.292556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.292718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.292744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.292895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.292922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.293115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.293140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.293282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.293307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.293477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.293502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.293672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.293707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.293856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.293887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.294030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.294055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.294195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.294220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.294393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.294419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.294586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.294611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.294766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.294796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.294938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.294965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.295117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.295157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.295353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.295380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.295519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.295546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.295683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.295709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.295856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.295888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.296055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.296081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.296250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.296276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.296414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.296439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.296610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.296636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.296799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.296824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.296971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.296997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.297133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.297159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 
00:24:48.298 [2024-07-15 17:47:43.297321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.297346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.297478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.297504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.297673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.297699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.297856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.297890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.298036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.298064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.298228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.298254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.298422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.298447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.298590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.298 [2024-07-15 17:47:43.298616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.298 qpair failed and we were unable to recover it. 00:24:48.298 [2024-07-15 17:47:43.298774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.298799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.298964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.298990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.299138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.299164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.299611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.299640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.299787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.299814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.299960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.299987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.300404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.300447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.300642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.300668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.300846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.300872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.301029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.301055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.301197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.301223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.301378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.301403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.301567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.301593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.302028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.302057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.302197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.302223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.302387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.302412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.302792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.302820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.302993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.303021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.303163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.303194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.303332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.303357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.303496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.303522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.303683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.303709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.303881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.303908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.304048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.304074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.304210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.304236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.304398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.304424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.304594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.304619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.304799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.304838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.304990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.305017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.305160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.305186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.305325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.305352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.305552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.305577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.305715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.305740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.305891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.305918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.306064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.306089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.306253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.306279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.306440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.306465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.306604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.306629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.306760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.306785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.306932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.306957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.307122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.307148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.307297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.307323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.307492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.307517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.307658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.307683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.307846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.307871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.308033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.308072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.308242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.308269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.308432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.308457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.308594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.308619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.308755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.308780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.308929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.308955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.309119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.309144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.309310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.309336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.309480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.309506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.309647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.309672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.309831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.309857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.310026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.310052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.310190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.310216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.310376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.310407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.310551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.310576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.310748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.310773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.310915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.310941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.311103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.311128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.311291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.311316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.311474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.311500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.311634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.311661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.311825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.311850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.312016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.312042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.312183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.312208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.312348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.312373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.312543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.312569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.312709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.312735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 
00:24:48.299 [2024-07-15 17:47:43.312906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.312932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.313071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.313097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.313236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.313262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.299 [2024-07-15 17:47:43.313430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.299 [2024-07-15 17:47:43.313459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.299 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.313601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.313627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.313760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.313785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.313959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.313985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.314148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.314173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.314311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.314336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.314526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.314552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.314719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.314746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.314883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.314909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.315046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.315071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.315232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.315258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.315397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.315423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.315593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.315619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.315758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.315784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.315961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.315987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.316117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.316143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.316276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.316301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.316431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.316457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.316603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.316628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.316765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.316790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.316951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.316977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.317147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.317172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.317309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.317334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.317471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.317500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.317634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.317662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.317808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.317833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.318041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.318067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.318206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.318231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.318405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.318430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.318570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.318595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.318760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.318785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.318948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.318973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.319102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.319127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.319294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.319319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.319453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.319478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.319619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.319654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.319838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.319884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.320059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.320087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.320236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.320263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.320428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.320453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.320590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.320617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.320754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.320780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.320916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.320943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.321129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.321154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.321292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.321317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.321458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.321482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.321618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.321642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.321804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.321829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.322003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.322030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.322199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.322225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.322356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.322382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.322553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.322578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.322739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.322765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.322933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.322959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.323098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.323124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.323263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.323289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.323429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.323455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.323593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.323618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.323777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.323803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.323941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.323966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.324141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.324168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.324348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.324373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.324508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.324532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.324677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.324706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.324841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.324865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.325035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.325060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.325196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.325221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.325379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.325404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.325553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.325577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.325736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.325761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.325900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.325925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.326062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.326087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.326257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.326284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.326450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.326475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.326606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.326632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.326787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.326812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.326948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.326975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.327145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.327172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.327316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.327343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.327487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.327513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.327651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.327676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.327834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.327859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.328015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.328041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.328211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.328237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.328403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.328429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.328589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.328614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.300 [2024-07-15 17:47:43.328754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.328780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 
00:24:48.300 [2024-07-15 17:47:43.328964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.300 [2024-07-15 17:47:43.328990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.300 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.329132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.329157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.329319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.329345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.329513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.329540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.329678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.329703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.329862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.329892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.330032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.330056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.330205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.330230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.330368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.330393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.330559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.330587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.330730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.330756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.330921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.330948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.331088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.331114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.331282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.331307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.331441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.331468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.331628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.331654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.331818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.331848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.332024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.332049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.332187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.332213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.332354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.332378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.332549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.332573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.332737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.332762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.332929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.332964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.333108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.333133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.333291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.333316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.333477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.333502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.333716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.333742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.333888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.333917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.334061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.334087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.334248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.334274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.334415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.334442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.334616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.334643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.334802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.334828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.334994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.335021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.335186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.335213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.335378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.335406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.335593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.335619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.335760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.335786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.335970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.335997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.336185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.336226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.336390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.336417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.336550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.336576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.336724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.336752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.336900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.336933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.337099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.337126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.337260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.337287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.337454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.337481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.337656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.337682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.337842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.337868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.338044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.338070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.338209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.338236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.338399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.338427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.338614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.338640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.338777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.338804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.338969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.338996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.339131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.339157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.339291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.339318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.339486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.339513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.339678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.339704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.339842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.339868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.340049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.340078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.340242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.340269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.340438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.340464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.340618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.340644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.340797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.340823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.340986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.341013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.341178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.341205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.341371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.341397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.341563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.341589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.341730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.341757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.341923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.341950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.342085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.342112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.342269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.342295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.342462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.342488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.342676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.342702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.342866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.342898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.343068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.343094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.343292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.343319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.343454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.343481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 00:24:48.301 [2024-07-15 17:47:43.343625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.343651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.301 qpair failed and we were unable to recover it. 
00:24:48.301 [2024-07-15 17:47:43.343788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.301 [2024-07-15 17:47:43.343814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.343958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.343985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.344124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.344151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.344282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.344312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.344481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.344508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.344648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.344675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.344816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.344843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.345007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.345034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.345206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.345232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.345372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.345398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.345535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.345562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.345732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.345759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.345909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.345943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.346076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.346102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.346277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.346303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.346491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.346517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.346706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.346732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.346925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.346952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.347112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.347138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.347325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.347351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.347542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.347568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.347708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.347733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.347921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.347947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.348104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.348141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.348308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.348335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.348505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.348531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.348694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.348721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.348888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.348924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.349091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.349117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.349250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.349276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.349443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.349469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.349637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.349663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.349827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.349853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.350008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.350035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.350175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.350200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.350385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.350411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.350544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.350570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.350731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.350757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.350923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.350949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.351083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.351111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.351242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.351268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.351432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.351458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.351599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.351627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.351766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.351799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.351962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.351989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.352163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.352188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.352366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.352392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.352564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.352589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.352757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.352784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.352915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.352942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.353111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.353137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.353323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.353349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.353481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.353507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.353648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.353674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.353815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.353841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.354007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.354034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.354180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.354205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.354377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.354404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.354597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.354624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.354753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.354779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.354949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.354976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.355115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.355141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.355281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.355309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.355444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.355471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.355634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.355660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.355849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.355882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.356026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.356052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.356194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.356219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.356387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.356414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.356549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.356575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.356743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.356769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.356932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.356958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.357127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.357153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.357315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.357341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.357499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.357525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.357714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.357740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.357901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.357938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.358108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.358134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.358302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.358328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.358516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.358542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.358672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.358698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.358829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.358855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.359025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.359052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.359197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.359228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.359423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.359449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.359619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.359645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.359780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.359805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.359979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.360005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.360148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.360174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.360366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.360391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 
00:24:48.302 [2024-07-15 17:47:43.360556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.302 [2024-07-15 17:47:43.360583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.302 qpair failed and we were unable to recover it. 00:24:48.302 [2024-07-15 17:47:43.360742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.360769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.360935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.360962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.361128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.361154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.361296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.361322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.361488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.361515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.361702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.361728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.361923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.361949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.362087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.362113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.362297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.362324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 
00:24:48.303 [2024-07-15 17:47:43.362459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.362484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.362623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.362649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.362815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.362841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.363015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.363042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.363188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.363215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.363375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.363402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.363590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.363616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.363758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.363784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.363925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.363951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.364114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.364140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 
00:24:48.303 [2024-07-15 17:47:43.364304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.364330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.364498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.364524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.364663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.364690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.364828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.364855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.365022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.365050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.365217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.365244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.365369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.365395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.365584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.365609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.365768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.365794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.365955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.365983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 
00:24:48.303 [2024-07-15 17:47:43.366140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.366166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.366333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.366359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.366516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.366541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.366674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.366703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.366870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.366901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.367069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.367095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.367263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.367290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.367465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.367491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.367673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.367699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.367890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.367916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 
00:24:48.303 [2024-07-15 17:47:43.368057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.368084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.368227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.368255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.368423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.368449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.368586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.368613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.368777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.368802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.368982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.369009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.369154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.369180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.369342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.369368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.369528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.369554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.369688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.369713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 
00:24:48.303 [2024-07-15 17:47:43.369899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.369926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.370058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.370084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.370270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.370296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.370460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.370486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.370647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.370674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.370861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.370902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.371071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.371096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.371260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.371286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.371454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.371480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.371640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.371666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 
00:24:48.303 [2024-07-15 17:47:43.371846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.371872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.372057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.372083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.372228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.372254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.372423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.372449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.372607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.372633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.372799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.372826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.372966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.372993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.373183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.373209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.373348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.373374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 00:24:48.303 [2024-07-15 17:47:43.373545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.303 [2024-07-15 17:47:43.373571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.303 qpair failed and we were unable to recover it. 
00:24:48.304 [2024-07-15 17:47:43.373710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.373736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.373926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.373952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.374118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.374145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.374286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.374316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.374475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.374501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.374661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.374687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.374856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.374886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.375074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.375100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.375235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.375261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.375415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.375441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 
00:24:48.304 [2024-07-15 17:47:43.375606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.375633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.375825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.375851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.376016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.376043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.376183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.376210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.376382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.376409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.376573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.376598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.376768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.376794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.376960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.376987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.377149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.377175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.377342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.377368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 
00:24:48.304 [2024-07-15 17:47:43.377536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.377562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.377725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.377751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.377915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.377942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.378073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.378099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.378268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.378293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.378463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.378489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.378634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.378660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.378824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.378849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.378997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.379024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.379217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.379243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 
00:24:48.304 [2024-07-15 17:47:43.379412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.379438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.379633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.379659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.379795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.379821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.379950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.379977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.380123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.380149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.380277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.380303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.380442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.380468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.380629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.380655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.380792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.380819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.380997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.381024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 
00:24:48.304 [2024-07-15 17:47:43.381192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.381219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.381410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.381436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.381567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.381593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.381756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.381786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.381953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.381980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.382176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.382202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.382368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.382394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.382537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.382564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.382713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.382739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.382928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.382954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 
00:24:48.304 [2024-07-15 17:47:43.383115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.383141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.304 [2024-07-15 17:47:43.383302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.304 [2024-07-15 17:47:43.383328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.304 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.383499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.383525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.383681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.383707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.383884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.383910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.384074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.384100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.384242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.384268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.384409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.384436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.384582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.384609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.384765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.384792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 
00:24:48.305 [2024-07-15 17:47:43.384958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.384984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.385175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.385201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.385368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.385394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.385560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.385585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.385773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.385799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.385945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.385972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.386103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.386130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.386303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.386328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.386493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.386518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.386684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.386710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 
00:24:48.305 [2024-07-15 17:47:43.386896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.386923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.387049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.387075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.387246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.387272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.387403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.387429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.387591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.387617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.387777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.387803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.387978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.388004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.388166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.388193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.388358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.388384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.388546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.388572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 
00:24:48.305 [2024-07-15 17:47:43.388739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.388766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.388893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.388928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.389098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.389136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.389306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.389338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.389504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.389531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.389697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.389722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.389905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.389931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.390092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.390118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.390284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.390310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.390447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.390473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 
00:24:48.305 [2024-07-15 17:47:43.390651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.390678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.390868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.390910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.305 qpair failed and we were unable to recover it. 00:24:48.305 [2024-07-15 17:47:43.391078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.305 [2024-07-15 17:47:43.391104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.391259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.391286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.391457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.391484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.391619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.391645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.391783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.391809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.391947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.391973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.392161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.392187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.392345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.392371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 
00:24:48.306 [2024-07-15 17:47:43.392511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.392537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.392667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.392693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.392861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.392893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.393040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.393065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.393210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.393237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.393428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.393455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.393594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.393620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.393760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.393788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.393928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.393955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.394121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.394154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 
00:24:48.306 [2024-07-15 17:47:43.394326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.394352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.394540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.394566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.394698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.394724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.394898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.394927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.395115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.395150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.395289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.395316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.395477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.395503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.395671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.395695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.395865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.395896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.396087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.396113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 
00:24:48.306 [2024-07-15 17:47:43.396246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.396272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.396437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.396463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.396602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.396629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.396776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.396807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.396944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.396970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.397172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.397198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.397364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.397390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.397558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.397584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.397717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.397744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.397873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.397904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 
00:24:48.306 [2024-07-15 17:47:43.398073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.398099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.398257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.398283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.306 [2024-07-15 17:47:43.398455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.306 [2024-07-15 17:47:43.398481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.306 qpair failed and we were unable to recover it. 00:24:48.307 [2024-07-15 17:47:43.398619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.307 [2024-07-15 17:47:43.398645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.307 qpair failed and we were unable to recover it. 00:24:48.307 [2024-07-15 17:47:43.398788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.307 [2024-07-15 17:47:43.398813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.307 qpair failed and we were unable to recover it. 00:24:48.307 [2024-07-15 17:47:43.399013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.307 [2024-07-15 17:47:43.399039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.307 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.399227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.399255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.399431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.399458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.399588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.399614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.399774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.399800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 
00:24:48.586 [2024-07-15 17:47:43.399965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.399999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.400264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.400293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.400456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.400483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.400636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.400662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.400809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.400837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.400983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.401010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.401152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.401179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.401346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.401373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.401543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.401570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.586 [2024-07-15 17:47:43.401733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.401759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 
00:24:48.586 [2024-07-15 17:47:43.401909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.586 [2024-07-15 17:47:43.401936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.586 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.402082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.402109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.402245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.402272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.402437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.402463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.402603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.402629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.402764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.402790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.402979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.403006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.403150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.403178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.403314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.403341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.403506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.403532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 
00:24:48.587 [2024-07-15 17:47:43.403690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.403717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.403853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.403893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.404090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.404116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.404303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.404332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.404464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.404502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.404697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.404723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.404910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.404947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.405110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.405141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.405277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.405304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.405469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.405496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 
00:24:48.587 [2024-07-15 17:47:43.405633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.405660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.405832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.405859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.406029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.406055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.406227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.406253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.406417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.406444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.406576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.406603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.406729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.406755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.406951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.406978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.407105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.407143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.407309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.407336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 
00:24:48.587 [2024-07-15 17:47:43.407471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.407499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.407642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.407668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.407857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.407889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.408060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.408086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.408227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.408254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.408416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.408442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.408601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.408627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.408787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.408814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.408983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.409009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.409147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.409176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 
00:24:48.587 [2024-07-15 17:47:43.409336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.409363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.587 qpair failed and we were unable to recover it. 00:24:48.587 [2024-07-15 17:47:43.409562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.587 [2024-07-15 17:47:43.409589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.409756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.409782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.409919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.409945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.410076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.410102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.410272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.410298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.410436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.410463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.410624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.410650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.410788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.410816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.410973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.411000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 
00:24:48.588 [2024-07-15 17:47:43.411191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.411217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.411353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.411379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.411516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.411542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.411730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.411761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.411922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.411949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.412115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.412152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.412315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.412342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.412474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.412500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.412669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.412696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.412846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.412872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 
00:24:48.588 [2024-07-15 17:47:43.413023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.413049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.413204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.413231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.413399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.413425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.413614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.413640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.413802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.413828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.413955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.413982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.414141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.414167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.414307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.414334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.414472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.414498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.414664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.414691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 
00:24:48.588 [2024-07-15 17:47:43.414863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.414894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.415047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.415073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.415261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.415287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.415427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.415454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.415643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.415670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.415867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.415898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.416090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.416117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.416313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.416339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.416506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.416532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.416660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.416686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 
00:24:48.588 [2024-07-15 17:47:43.416856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.416888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.417078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.588 [2024-07-15 17:47:43.417104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.588 qpair failed and we were unable to recover it. 00:24:48.588 [2024-07-15 17:47:43.417240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.417266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.417428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.417454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.417610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.417636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.417800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.417826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.418026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.418053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.418202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.418229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.418395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.418421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.418588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.418614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 
00:24:48.589 [2024-07-15 17:47:43.418756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.418783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.418925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.418952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.419139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.419165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.419327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.419357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.419522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.419548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.419709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.419735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.419916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.419942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.420089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.420114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.420306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.420332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.420491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.420517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 
00:24:48.589 [2024-07-15 17:47:43.420670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.420696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.420860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.420892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.421071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.421098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.421271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.421313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.421463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.421491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.421633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.421659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.421798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.421824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.422003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.422031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.422227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.422254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.422419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.422445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 
00:24:48.589 [2024-07-15 17:47:43.422584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.422611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.422740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.422766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.422948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.422975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.423147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.423173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.423342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.423368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.423561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.423587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.423757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.423783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.423957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.423985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.424151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.424177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.424344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.424370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 
00:24:48.589 [2024-07-15 17:47:43.424539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.424565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.424733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.424759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.424932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.589 [2024-07-15 17:47:43.424959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.589 qpair failed and we were unable to recover it. 00:24:48.589 [2024-07-15 17:47:43.425126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.425154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.425329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.425355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.425502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.425530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.425697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.425724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.425889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.425929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.426122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.426148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.426282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.426309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 
00:24:48.590 [2024-07-15 17:47:43.426477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.426503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.426671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.426698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.426844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.426870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.427075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.427106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.427249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.427276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.427444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.427471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.427614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.427640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.427839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.427866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.428053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.428079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.428248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.428274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 
00:24:48.590 [2024-07-15 17:47:43.428439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.428465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.428656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.428682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.428853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.428888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.429098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.429125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.429290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.429316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.429503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.429530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.429675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.429703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.429905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.429936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.430104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.430139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.430307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.430333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 
00:24:48.590 [2024-07-15 17:47:43.430501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.430528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.430691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.430717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.430886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.430923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.431096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.431134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.431307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.431333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.431500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.431527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.431669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.590 [2024-07-15 17:47:43.431696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.590 qpair failed and we were unable to recover it. 00:24:48.590 [2024-07-15 17:47:43.431863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.431896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.432070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.432097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.432299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.432326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 
00:24:48.591 [2024-07-15 17:47:43.432490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.432521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.432692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.432719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.432889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.432928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.433068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.433094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.433297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.433323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.433482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.433509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.433666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.433693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.433867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.433901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.434082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.434108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.434249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.434275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 
00:24:48.591 [2024-07-15 17:47:43.434415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.434441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.434608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.434634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.434796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.434822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.435013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.435041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.435211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.435237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.435404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.435430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.435623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.435649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.435816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.435842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.435982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.436010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.436207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.436234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 
00:24:48.591 [2024-07-15 17:47:43.436380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.436406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.436597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.436624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.436760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.436787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.436935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.436963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.437159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.437186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.437351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.437378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.437542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.437569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.437765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.437792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.437935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.437962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.438123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.438149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 
00:24:48.591 [2024-07-15 17:47:43.438320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.438346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.438488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.438514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.438684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.438710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.438901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.438928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.439095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.439121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.439259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.439285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.439476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.439502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.591 qpair failed and we were unable to recover it. 00:24:48.591 [2024-07-15 17:47:43.439692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.591 [2024-07-15 17:47:43.439719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.439863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.439894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.440090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.440116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 
00:24:48.592 [2024-07-15 17:47:43.440260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.440290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.440459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.440485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.440631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.440657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.440846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.440872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.441027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.441055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.441221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.441247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.441417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.441443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.441614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.441641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.441833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.441860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.442023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.442050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 
00:24:48.592 [2024-07-15 17:47:43.442193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.442219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.442379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.442406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.442575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.442601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.442769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.442795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.442977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.443017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.443178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.443218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.443417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.443445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.443616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.443642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.443783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.443809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.444002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.444030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 
00:24:48.592 [2024-07-15 17:47:43.444169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.444195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.444337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.444364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.444552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.444578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.444739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.444768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.444939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.444966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.445153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.445180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.445344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.445371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.445546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.445572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.445706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.445734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.445925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.445953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 
00:24:48.592 [2024-07-15 17:47:43.446119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.446145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.446304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.446330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.446520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.446546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.446713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.446739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.446882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.446909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.447091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.447117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.447252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.447278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.447418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.447443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.592 qpair failed and we were unable to recover it. 00:24:48.592 [2024-07-15 17:47:43.447580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.592 [2024-07-15 17:47:43.447606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.447744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.447770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 
00:24:48.593 [2024-07-15 17:47:43.447936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.447963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.448135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.448161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.448303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.448330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.448519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.448544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.448673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.448699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.448863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.448895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.449040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.449065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.449229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.449255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.449443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.449469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.449605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.449630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 
00:24:48.593 [2024-07-15 17:47:43.449770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.449796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.449951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.449977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.450165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.450190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.450351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.450377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.450517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.450543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.450703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.450729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.450874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.450906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.451071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.451097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.451286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.451312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.451471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.451497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 
00:24:48.593 [2024-07-15 17:47:43.451637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.451663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.451825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.451851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.452024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.452051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.452219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.452245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.452409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.452435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.452575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.452601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.452734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.452760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.452930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.452957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.453128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.453156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.453317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.453343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 
00:24:48.593 [2024-07-15 17:47:43.453512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.453538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.453708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.453734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.453911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.453951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.454103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.454131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.454295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.454321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.454492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.454518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.454680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.454706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.454832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.454858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.455017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.455043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 00:24:48.593 [2024-07-15 17:47:43.455200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.593 [2024-07-15 17:47:43.455226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.593 qpair failed and we were unable to recover it. 
00:24:48.594 [2024-07-15 17:47:43.455364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.455389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.455557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.455583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.455747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.455772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.455944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.455970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.456156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.456181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.456354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.456381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.456541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.456567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.456735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.456762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.456932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.456959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.457091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.457117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 
00:24:48.594 [2024-07-15 17:47:43.457281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.457307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.457470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.457496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.457687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.457713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.457874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.457905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.458044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.458075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.458206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.458232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.458421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.458447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.458602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.458628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.458791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.458817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.458953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.458979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 
00:24:48.594 [2024-07-15 17:47:43.459144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.459170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.459338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.459364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.459530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.459556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.459721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.459748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.459929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.459968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.460147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.460175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.460315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.460343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.460475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.460502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.460709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.460749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.460939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.460968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 
00:24:48.594 [2024-07-15 17:47:43.461137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.461164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.461296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.461322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.461468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.461494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.461625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.461651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.461833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.461859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.461995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.462020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.462202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.462241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.462415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.462442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.462608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.462635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.594 [2024-07-15 17:47:43.462803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.462829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 
00:24:48.594 [2024-07-15 17:47:43.462996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.594 [2024-07-15 17:47:43.463023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.594 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.463205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.463238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.463416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.463443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.463613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.463641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.463801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.463828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.464027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.464054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.464232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.464258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.464428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.464454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.464618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.464644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.464790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.464816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 
00:24:48.595 [2024-07-15 17:47:43.464991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.465017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.465146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.465172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.465336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.465362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.465527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.465554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.465692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.465719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.465854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.465888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.466059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.466085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.466252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.466279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.466439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.466465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.466595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.466621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 
00:24:48.595 [2024-07-15 17:47:43.466759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.466786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.466925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.466952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.467119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.467155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.467347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.467373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.467561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.467588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.467746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.467773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.467945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.467971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.468126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.468152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.468345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.468372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.468517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.468542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 
00:24:48.595 [2024-07-15 17:47:43.468708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.468735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.468886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.468912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.469048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.469074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.469250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.469276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.469469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.469495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.469683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.595 [2024-07-15 17:47:43.469709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.595 qpair failed and we were unable to recover it. 00:24:48.595 [2024-07-15 17:47:43.469845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.469871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.470042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.470068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.470232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.470257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.470419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.470445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 
00:24:48.596 [2024-07-15 17:47:43.470585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.470611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.470804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.470834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.471069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.471096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.471284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.471325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.471472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.471501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.471647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.471675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.471814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.471840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.471990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.472018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.472185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.472212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.472402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.472428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 
00:24:48.596 [2024-07-15 17:47:43.472617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.472643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.472811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.472838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.473022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.473051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.473230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.473270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.473468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.473496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.473634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.473661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.473811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.473837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.474022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.474063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.474249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.474278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.474451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.474478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 
00:24:48.596 [2024-07-15 17:47:43.474623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.474656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.474825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.474851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.475006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.475034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.475214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.475241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.475406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.475433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.475580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.475606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.475770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.596 [2024-07-15 17:47:43.475796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.596 qpair failed and we were unable to recover it. 00:24:48.596 [2024-07-15 17:47:43.475965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.597 [2024-07-15 17:47:43.475992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.597 qpair failed and we were unable to recover it. 00:24:48.597 [2024-07-15 17:47:43.476189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.597 [2024-07-15 17:47:43.476216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.597 qpair failed and we were unable to recover it. 00:24:48.597 [2024-07-15 17:47:43.476375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.597 [2024-07-15 17:47:43.476401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.597 qpair failed and we were unable to recover it. 
00:24:48.597 [2024-07-15 17:47:43.476570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.597 [2024-07-15 17:47:43.476596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420
00:24:48.597 qpair failed and we were unable to recover it.
00:24:48.597 [2024-07-15 17:47:43.479313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.597 [2024-07-15 17:47:43.479343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:48.597 qpair failed and we were unable to recover it.
00:24:48.600 [2024-07-15 17:47:43.502032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.600 [2024-07-15 17:47:43.502071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420
00:24:48.600 qpair failed and we were unable to recover it.
00:24:48.602 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 17:47:43.476788 through 17:47:43.516666 for tqpair=0x7effc0000b90, tqpair=0x1c52200, and tqpair=0x7effc8000b90, all with addr=10.0.0.2, port=4420 ...]
00:24:48.602 [2024-07-15 17:47:43.516852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.516883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.517020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.517046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.517215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.517241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.517376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.517401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.517569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.517600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.517768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.517795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.517937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.517963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.518135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.518163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.518364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.518390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.518581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.518608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 
00:24:48.602 [2024-07-15 17:47:43.518767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.518793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.518926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.518952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.519114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.519144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.519287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.519314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.519478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.519503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.519669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.519697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.519836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.519863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.520051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.520077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.520256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.520282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.520466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.520492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 
00:24:48.602 [2024-07-15 17:47:43.520678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.520705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.520892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.520919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.521059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.521086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.521275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.521315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.521491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.521520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.602 qpair failed and we were unable to recover it. 00:24:48.602 [2024-07-15 17:47:43.521685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.602 [2024-07-15 17:47:43.521713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.521855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.521888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.522086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.522113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.522305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.522332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.522498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.522525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 
00:24:48.603 [2024-07-15 17:47:43.522687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.522714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.522853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.522894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.523069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.523096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.523298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.523325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.523517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.523543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.523686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.523714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.523886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.523913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.524049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.524075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.524265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.524291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.524437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.524464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 
00:24:48.603 [2024-07-15 17:47:43.524601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.524627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.524769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.524797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.524967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.524996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.525162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.525189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.525355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.525382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.525553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.525580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.525729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.525756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.525921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.525949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.526114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.526141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.526299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.526325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 
00:24:48.603 [2024-07-15 17:47:43.526486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.526512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.526655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.526681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.526869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.526900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.527071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.527097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.527232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.527259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.527422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.527448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.527618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.527644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.527815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.527841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.528026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.528056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.528196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.528222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 
00:24:48.603 [2024-07-15 17:47:43.528425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.528451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.528618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.528644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.528806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.528832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.528982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.529008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.603 [2024-07-15 17:47:43.529176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.603 [2024-07-15 17:47:43.529202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.603 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.529395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.529420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.529558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.529585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.529728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.529754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.529923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.529950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.530117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.530143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 
00:24:48.604 [2024-07-15 17:47:43.530315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.530341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.530511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.530536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.530682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.530708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.530871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.530902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.531044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.531069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.531210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.531235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.531406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.531431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.531586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.531612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.531792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.531818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.531951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.531978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 
00:24:48.604 [2024-07-15 17:47:43.532121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.532148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.532281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.532307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.532470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.532496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.532660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.532687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.532850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.532882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.533032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.533063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.533251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.533277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.533438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.533464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.533599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.533625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.533792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.533818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 
00:24:48.604 [2024-07-15 17:47:43.533958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.533984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.534148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.534174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.534341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.534366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.534503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.534529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.534667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.534693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.534824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.534850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.535040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.535066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.535231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.535256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.535391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.535417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.535556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.535582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 
00:24:48.604 [2024-07-15 17:47:43.535743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.535769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.535956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.535983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.536147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.604 [2024-07-15 17:47:43.536173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.604 qpair failed and we were unable to recover it. 00:24:48.604 [2024-07-15 17:47:43.536316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.536342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.536506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.536532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.536724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.536750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.536889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.536916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.537079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.537105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.537240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.537266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.537405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.537431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 
00:24:48.605 [2024-07-15 17:47:43.537572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.537598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.537766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.537792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.537937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.537964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.538133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.538159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.538303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.538329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.538462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.538488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.538622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.538648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.538784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.538812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.538978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.539005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.539140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.539166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 
00:24:48.605 [2024-07-15 17:47:43.539305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.539331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.539531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.539557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.539724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.539750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.539924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.539950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.540102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.540128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.540293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.540319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.540511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.540538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.540699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.540725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.540887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.540914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.541058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.541085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 
00:24:48.605 [2024-07-15 17:47:43.541245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.541271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.541412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.541438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.541568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.541594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.541776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.541802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.541967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.541994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.542156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.542182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.542326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.542352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.542521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.542547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.542688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.542714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.542882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.542908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 
00:24:48.605 [2024-07-15 17:47:43.543057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.543084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.543216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.543242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.543408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.543434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.543599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.605 [2024-07-15 17:47:43.543625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.605 qpair failed and we were unable to recover it. 00:24:48.605 [2024-07-15 17:47:43.543814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.543840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.544012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.544039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.544199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.544225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.544367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.544393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.544554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.544580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.544749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.544775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 
00:24:48.606 [2024-07-15 17:47:43.544931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.544958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.545096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.545122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.545263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.545290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.545464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.545495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.545642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.545667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.545839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.545865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.546013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.546040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.546196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.546222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.546389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.546415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.546556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.546583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 
00:24:48.606 [2024-07-15 17:47:43.546742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.546768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.546958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.546985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.547123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.547149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.547313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.547340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.547503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.547529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.547693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.547719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.547889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.547916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.548051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.548077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.548235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.548261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.548455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.548481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 
00:24:48.606 [2024-07-15 17:47:43.548641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.548667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.548865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.548896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.549037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.549064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.549199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.549225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.549388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.549414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.549580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.549606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.549776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.549802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.549969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.549995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.550188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.550214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.550361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.550387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 
00:24:48.606 [2024-07-15 17:47:43.550528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.550558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.550755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.550780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.550975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.551001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.551144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.551170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.551306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.606 [2024-07-15 17:47:43.551332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.606 qpair failed and we were unable to recover it. 00:24:48.606 [2024-07-15 17:47:43.551496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.551522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.551692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.551717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.551885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.551911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.552053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.552079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.552242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.552268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 
00:24:48.607 [2024-07-15 17:47:43.552431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.552457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.552600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.552626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.552800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.552826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.552967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.552993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.553136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.553162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.553329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.553357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.553554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.553580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.553749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.553775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.553919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.553947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.554110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.554136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 
00:24:48.607 [2024-07-15 17:47:43.554303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.554329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.554492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.554518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.554706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.554732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.554862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.554905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.555052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.555078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.555268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.555294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.555425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.555451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.555609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.555639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.555785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.555811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.555951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.555981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 
00:24:48.607 [2024-07-15 17:47:43.556125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.556152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.556317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.556343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.556477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.556503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.556668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.556695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.556855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.556887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.557026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.557052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.557215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.557242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.557380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.557407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.557549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.557576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.557745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.557771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 
00:24:48.607 [2024-07-15 17:47:43.557953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.557981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.558129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.558155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.558328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.558354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.558524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.558550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.558695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.558721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.558866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.607 [2024-07-15 17:47:43.558897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.607 qpair failed and we were unable to recover it. 00:24:48.607 [2024-07-15 17:47:43.559037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.559064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.559206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.559232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.559382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.559408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.559569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.559595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 
00:24:48.608 [2024-07-15 17:47:43.559735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.559761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.559902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.559930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.560097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.560123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.560283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.560309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.560466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.560492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.560662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.560688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.560827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.560854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.561000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.561027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.561171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.561199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.561389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.561415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 
00:24:48.608 [2024-07-15 17:47:43.561578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.561603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.561805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.561831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.561993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.562020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.562159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.562184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.562320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.562345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.562512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.562538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.562720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.562746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.562939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.562966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.563134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.563160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.563293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.563319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 
00:24:48.608 [2024-07-15 17:47:43.563460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.563486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.563660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.563686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.563864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.563899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.564058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.564084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.564279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.564304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.564469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.564495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.564631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.564657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.564823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.564849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.564998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.565024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.565169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.565196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 
00:24:48.608 [2024-07-15 17:47:43.565384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.565410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.565550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.608 [2024-07-15 17:47:43.565576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.608 qpair failed and we were unable to recover it. 00:24:48.608 [2024-07-15 17:47:43.565710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.565737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.565883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.565910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.566075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.566101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.566237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.566263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.566401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.566427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.566569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.566595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.566776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.566801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.566960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.566987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 
00:24:48.609 [2024-07-15 17:47:43.567147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.567174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.567333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.567358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.567515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.567541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.567704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.567730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.567895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.567922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.568057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.568087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.568235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.568262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.568420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.568447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.568598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.568624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.568760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.568786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 
00:24:48.609 [2024-07-15 17:47:43.568962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.568999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.569147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.569173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.569303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.569329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.569491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.569517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.569656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.569683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.569817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.569843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.570015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.570042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.570207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.570233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.570366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.570392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.570590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.570616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 
00:24:48.609 [2024-07-15 17:47:43.570785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.570811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.570954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.570980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.571137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.571164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.571328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.571354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.571543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.571569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.571710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.571737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.571933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.571960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.572095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.572122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.572260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.572286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 00:24:48.609 [2024-07-15 17:47:43.572428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.609 [2024-07-15 17:47:43.572454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.609 qpair failed and we were unable to recover it. 
00:24:48.609 [2024-07-15 17:47:43.572620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.609 [2024-07-15 17:47:43.572646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:48.609 qpair failed and we were unable to recover it.
00:24:48.609 [2024-07-15 17:47:43.572778 through 17:47:43.612109] (the same three-message sequence repeats for every reconnect attempt in this window: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. Three attempts around 17:47:43.603422 to 17:47:43.603862 report tqpair=0x7effc8000b90 instead of 0x1c52200.)
00:24:48.615 [2024-07-15 17:47:43.612270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.615 [2024-07-15 17:47:43.612296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:48.615 qpair failed and we were unable to recover it.
00:24:48.615 [2024-07-15 17:47:43.612462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.612489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.612632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.612659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.612822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.612848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.613017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.613043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.613183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.613210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.613380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.613406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.613540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.613566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.613729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.613755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.613919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.613945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.614134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.614164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 
00:24:48.615 [2024-07-15 17:47:43.614327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.614353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.614500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.614527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.614690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.614716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.614884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.614911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.615080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.615106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.615295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.615321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.615502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.615528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.615692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.615718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.615886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.615912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.616080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.616105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 
00:24:48.615 [2024-07-15 17:47:43.616289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.616315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.616479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.616506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.616653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.616679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.616850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.616882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.617047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.617073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.617212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.617237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.617374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.617400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.617575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.617601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.617791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.617816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.617991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.618018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 
00:24:48.615 [2024-07-15 17:47:43.618177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.618203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.618366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.618391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.618572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.618598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.618754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.618779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.618941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.615 [2024-07-15 17:47:43.618968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.615 qpair failed and we were unable to recover it. 00:24:48.615 [2024-07-15 17:47:43.619135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.619172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.619338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.619367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.619531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.619558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.619752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.619777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.619922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.619949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 
00:24:48.616 [2024-07-15 17:47:43.620096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.620122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.620262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.620288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.620423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.620449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.620589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.620614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.620748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.620775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.620943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.620969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.621174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.621200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.621364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.621389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.621538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.621564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.621694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.621720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 
00:24:48.616 [2024-07-15 17:47:43.621866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.621897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.622058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.622084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.622246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.622272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.622435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.622461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.622628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.622654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.622814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.622841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.623015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.623042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.623200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.623226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.623411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.623436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.623601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.623627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 
00:24:48.616 [2024-07-15 17:47:43.623771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.623796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.623962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.623989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.624149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.624175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.624311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.624337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.624529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.624555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.624750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.624776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.624969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.624996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.625169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.625195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.625357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.625383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.625560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.625586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 
00:24:48.616 [2024-07-15 17:47:43.625752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.625778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.616 [2024-07-15 17:47:43.625954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.616 [2024-07-15 17:47:43.625980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.616 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.626143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.626169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.626308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.626333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.626465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.626490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.626616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.626641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.626808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.626834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.627007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.627034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.627221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.627247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.627413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.627438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 
00:24:48.617 [2024-07-15 17:47:43.627681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.627707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.627907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.627944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.628134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.628164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.628296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.628321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.628492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.628517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.628685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.628711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.628963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.628990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.629185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.629211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.629376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.629402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.629576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.629603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 
00:24:48.617 [2024-07-15 17:47:43.629766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.629792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.629988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.630015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.630184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.630209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.630398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.630424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.630582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.630607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.630769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.630795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.630942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.630968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.631131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.631156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.631319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.631345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.631489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.631515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 
00:24:48.617 [2024-07-15 17:47:43.631655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.631681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.631847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.631873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.632040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.632066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.632228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.632254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.632389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.632421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.632560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.632586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.632737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.632762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.632903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.632929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.633096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.633122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.633284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.633310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 
00:24:48.617 [2024-07-15 17:47:43.633500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.633525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.633691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.633716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.633887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.617 [2024-07-15 17:47:43.633913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.617 qpair failed and we were unable to recover it. 00:24:48.617 [2024-07-15 17:47:43.634081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.634108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.634296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.634321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.634512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.634538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.634706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.634732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.634867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.634911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.635086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.635113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.635279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.635312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 
00:24:48.618 [2024-07-15 17:47:43.635500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.635526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.635668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.635698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.635891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.635926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.636169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.636203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.636448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.636474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.636661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.636686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.636824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.636850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.637018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.637045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.637211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.637236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 00:24:48.618 [2024-07-15 17:47:43.637405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.618 [2024-07-15 17:47:43.637431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.618 qpair failed and we were unable to recover it. 
00:24:48.618 [2024-07-15 17:47:43.637671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.618 [2024-07-15 17:47:43.637697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:48.618 qpair failed and we were unable to recover it.
00:24:48.618 [... the same pair of records - posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420" - repeats for every reconnect attempt from 17:47:43.637 through 17:47:43.677, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:24:48.623 [2024-07-15 17:47:43.677574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.623 [2024-07-15 17:47:43.677599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:48.623 qpair failed and we were unable to recover it.
00:24:48.623 [2024-07-15 17:47:43.677792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.677817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.677979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.678005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.678143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.678168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.678299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.678324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.678490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.678515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.678691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.678716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.678880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.678906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.679040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.679065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.679253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.679283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.679447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.679474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 
00:24:48.623 [2024-07-15 17:47:43.679638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.679663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.679850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.679880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.680047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.680072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.680217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.680242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.680418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.680443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.680651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.623 [2024-07-15 17:47:43.680676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.623 qpair failed and we were unable to recover it. 00:24:48.623 [2024-07-15 17:47:43.680841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.680866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.681054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.681080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.681257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.681282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.681444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.681468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 
00:24:48.624 [2024-07-15 17:47:43.681628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.681653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.681810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.681834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.682013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.682039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.682228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.682254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.682415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.682440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.682578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.682603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.682742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.682766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.682934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.682959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.683098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.683123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.683299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.683324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 
00:24:48.624 [2024-07-15 17:47:43.683462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.683487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.683653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.683677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.683810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.683836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.684032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.684058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.684191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.684216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.684378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.684403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.684576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.684602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.684731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.684756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.684943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.684968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.685112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.685137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 
00:24:48.624 [2024-07-15 17:47:43.685297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.685323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.685461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.685486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.685651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.685676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.685809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.685834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.685980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.686005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.686172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.686197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.686329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.686354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.686509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.686535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.686728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.686753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.686906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.686932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 
00:24:48.624 [2024-07-15 17:47:43.687099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.687125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.687268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.687293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.687456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.687480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.687648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.687673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.687855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.687885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.688048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.688073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.688242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.688267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.688410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.688435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.688599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.688624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 00:24:48.624 [2024-07-15 17:47:43.688758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.624 [2024-07-15 17:47:43.688783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.624 qpair failed and we were unable to recover it. 
00:24:48.625 [2024-07-15 17:47:43.688947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.688972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.689160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.689185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.689354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.689379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.689553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.689578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.689738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.689763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.689928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.689954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.690099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.690123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.690284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.690310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.690445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.690470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.690631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.690657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 
00:24:48.625 [2024-07-15 17:47:43.690820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.690845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.691001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.691028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.691168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.691193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.691356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.691381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.691545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.691570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.691703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.691728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.691891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.691921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.692085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.692110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.692275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.692299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.692428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.692453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 
00:24:48.625 [2024-07-15 17:47:43.692609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.692634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.692792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.692816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.692992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.693018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.693154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.693179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.693322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.693346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.693504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.693529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.693686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.693711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.693896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.693922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.694058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.694083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.694216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.694241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 
00:24:48.625 [2024-07-15 17:47:43.694386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.694411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.694584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.694609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.694751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.694776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.694917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.694943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.695104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.695128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.695270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.695295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.695484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.695509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.695669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.695694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.695831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.625 [2024-07-15 17:47:43.695856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.625 qpair failed and we were unable to recover it. 00:24:48.625 [2024-07-15 17:47:43.696045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.696070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 
00:24:48.626 [2024-07-15 17:47:43.696221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.696246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.696433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.696457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.696589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.696614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.696771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.696800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.696986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.697012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.697152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.697177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.697338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.697363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.697525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.697550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.697717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.697743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.697911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.697936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 
00:24:48.626 [2024-07-15 17:47:43.698074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.698100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.698267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.698293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.698435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.698459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.698621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.698647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.698814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.698839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.698988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.699013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.699179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.699203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.699345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.699370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.699557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.699582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.626 [2024-07-15 17:47:43.699741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.699766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 
00:24:48.626 [2024-07-15 17:47:43.699937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.626 [2024-07-15 17:47:43.699963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.626 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.700122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.700149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.700289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.700315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.700502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.700528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.700672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.700697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.700873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.700903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.701071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.701097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.701231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.701256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.701415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.701441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.701581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.701606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 
00:24:48.908 [2024-07-15 17:47:43.701764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.701794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.701937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.701963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.702120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.702145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.702322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.702347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.702508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.702533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.702676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.702701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.702867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.702899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.703032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.703057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.908 qpair failed and we were unable to recover it. 00:24:48.908 [2024-07-15 17:47:43.703201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.908 [2024-07-15 17:47:43.703226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.703391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.703416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 
00:24:48.909 [2024-07-15 17:47:43.703604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.703629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.703817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.703841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.703992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.704018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.704180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.704204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.704343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.704368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.704504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.704528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.704687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.704712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.704901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.704927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.705065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.705090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.705257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.705282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 
00:24:48.909 [2024-07-15 17:47:43.705463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.705487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.705662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.705687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.705823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.705847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.706051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.706076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.706238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.706262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.706424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.706448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.706610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.706635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.706799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.706824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.706984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.707009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.707147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.707172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 
00:24:48.909 [2024-07-15 17:47:43.707310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.707335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.707495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.707519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.707710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.707736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.707867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.707899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.708063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.708088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.708329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.708354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.708488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.708513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.708648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.708674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.708871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.708913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.709076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.709100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 
00:24:48.909 [2024-07-15 17:47:43.709234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.709259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.709421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.909 [2024-07-15 17:47:43.709450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.909 qpair failed and we were unable to recover it. 00:24:48.909 [2024-07-15 17:47:43.709611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.709636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.709798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.709823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.709982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.710008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.710166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.710192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.710358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.710383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.710549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.710574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.710738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.710765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.710912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.710938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 
00:24:48.910 [2024-07-15 17:47:43.711097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.711122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.711287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.711312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.711472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.711497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.711693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.711718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.711857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.711888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.712031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.712057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.712233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.712258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.712421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.712446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.712615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.712641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.712801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.712826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 
00:24:48.910 [2024-07-15 17:47:43.712994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.713019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.713183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.713207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.713334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.713358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.713521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.713546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.713733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.713758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.713911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.713937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.714099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.714124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.714289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.714315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.714453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.714482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.714617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.714642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 
00:24:48.910 [2024-07-15 17:47:43.714803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.910 [2024-07-15 17:47:43.714827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.910 qpair failed and we were unable to recover it. 00:24:48.910 [2024-07-15 17:47:43.714989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.715015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.715173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.715198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.715366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.715391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.715515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.715541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.715701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.715726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.715913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.715939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.716101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.716125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.716366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.716391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.716554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.716580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 
00:24:48.911 [2024-07-15 17:47:43.716745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.716770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.716938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.716964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.717135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.717161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.717286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.717311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.717477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.717503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.718337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.718367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.718540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.718564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.719438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.719469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.719663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.719689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.720371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.720401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 
00:24:48.911 [2024-07-15 17:47:43.720590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.720616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.720803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.720829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.721019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.721045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.721231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.721256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.721394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.721420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.721608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.721638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.721827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.721852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.722025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.722051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.722219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.911 [2024-07-15 17:47:43.722244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.911 qpair failed and we were unable to recover it. 00:24:48.911 [2024-07-15 17:47:43.722408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.722433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 
00:24:48.912 [2024-07-15 17:47:43.722590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.722615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.722783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.722808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.722960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.722985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.723176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.723201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.723338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.723362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.723504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.723530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.723699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.723725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.723891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.723916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.724106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.724131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.724306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.724333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 
00:24:48.912 [2024-07-15 17:47:43.724524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.724549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.724711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.724737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.724928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.724954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.725119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.725144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.725317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.725342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.725484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.725509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.725674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.725699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.725887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.725912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.726098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.726123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.726299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.726324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 
00:24:48.912 [2024-07-15 17:47:43.726488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.726513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.726653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.726678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.726855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.726884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.727051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.727076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.727218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.727243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.727407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.727432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.727560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.727585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.912 qpair failed and we were unable to recover it. 00:24:48.912 [2024-07-15 17:47:43.727745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.912 [2024-07-15 17:47:43.727771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.727929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.727955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.728099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.728124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 
00:24:48.913 [2024-07-15 17:47:43.728289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.728315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.728458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.728482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.728640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.728666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.728825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.728850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.729023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.729048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.729177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.729202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.729381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.729406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.729567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.729592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.729755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.729780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.729928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.729954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 
00:24:48.913 [2024-07-15 17:47:43.730117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.730141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.730328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.730353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.730543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.730568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.730746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.730771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.730957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.730983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.731148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.731172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.731334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.731359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.731529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.731553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.731699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.731725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.731889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.731914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 
00:24:48.913 [2024-07-15 17:47:43.732109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.732134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.732275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.732300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.732472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.732497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.732634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.732661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.732859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.732904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.733044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.733069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.733235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.733260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.733396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.733423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.733612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.913 [2024-07-15 17:47:43.733637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.913 qpair failed and we were unable to recover it. 00:24:48.913 [2024-07-15 17:47:43.733805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.733830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 
00:24:48.914 [2024-07-15 17:47:43.734017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.734043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.734208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.734233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.734393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.734419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.734552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.734581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.734752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.734776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.735020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.735045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.735183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.735208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.735370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.735395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.735526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.735551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.735743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.735767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 
00:24:48.914 [2024-07-15 17:47:43.735937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.735963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.736103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.736128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.736275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.736300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.736456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.736481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.736646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.736670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.736806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.736831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.736995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.737020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.737161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.737186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.737362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.737387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.737566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.737591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 
00:24:48.914 [2024-07-15 17:47:43.737731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.737755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.737922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.737948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.738117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.738142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.738330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.738354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.738511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.738536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.738778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.738803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.914 [2024-07-15 17:47:43.738993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.914 [2024-07-15 17:47:43.739018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.914 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.739259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.739284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.739450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.739474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.739663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.739688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 
00:24:48.915 [2024-07-15 17:47:43.739852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.739887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.740052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.740077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.740242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.740267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.740407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.740432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.740594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.740619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.740782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.740807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.741003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.741028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.741172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.741198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.741367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.741393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.741583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.741608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 
00:24:48.915 [2024-07-15 17:47:43.741768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.741793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.741932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.741957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.742092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.742117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.742284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.742308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.742450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.742475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.742608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.742633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.742802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.742826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.742991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.743016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.743179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.743204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.743349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.743373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 
00:24:48.915 [2024-07-15 17:47:43.743542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.743567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.743730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.743755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.743944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.743969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.744133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.744157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.744323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.744348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.744487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.744513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.744680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.744705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.744866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.744900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.745066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.745090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.745253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.745278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 
00:24:48.915 [2024-07-15 17:47:43.745444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.745469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.745711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.745736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.745900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.745925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.746090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.746115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.746275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.746300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.746459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.746483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.746624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.915 [2024-07-15 17:47:43.746649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.915 qpair failed and we were unable to recover it. 00:24:48.915 [2024-07-15 17:47:43.746838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.746863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.747028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.747053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.747214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.747238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 
00:24:48.916 [2024-07-15 17:47:43.747404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.747430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.747624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.747650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.747793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.747818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.747983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.748008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.748168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.748193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.748382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.748407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.748549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.748574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.748739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.748764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.748900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.748925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.749113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.749139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 
00:24:48.916 [2024-07-15 17:47:43.749303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.749328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.749494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.749519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.749676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.749700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.749892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.749918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.750103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.750128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.750319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.750344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.750533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.750557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.750713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.750738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.750870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.750899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.751048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.751074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 
00:24:48.916 [2024-07-15 17:47:43.751220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.751246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.751488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.751513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.751704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.751728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.751890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.751917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.752052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.752078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.752260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.752286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.752425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.752450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.752608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.752632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.752773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.752801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.752972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.752997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 
00:24:48.916 [2024-07-15 17:47:43.753188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.753214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.753455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.753480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.753623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.753647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.753833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.753858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.754031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.754057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.754193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.754218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.754405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.916 [2024-07-15 17:47:43.754430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.916 qpair failed and we were unable to recover it. 00:24:48.916 [2024-07-15 17:47:43.754573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.754598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.754742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.754767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.754935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.754961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 
00:24:48.917 [2024-07-15 17:47:43.755098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.755123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.755291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.755316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.755454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.755479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.755642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.755667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.755832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.755858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.756008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.756033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.756166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.756191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.756435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.756460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.756648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.756673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.756808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.756833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 
00:24:48.917 [2024-07-15 17:47:43.757002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.757027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.757216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.757241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.757483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.757508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.757669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.757694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.757886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.757912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.758073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.758103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.758271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.758295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.758463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.758489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.758651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.758675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.758841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.758866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 
00:24:48.917 [2024-07-15 17:47:43.759037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.759063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.759228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.759253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.759389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.759415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.759549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.759574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.759762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.759786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.759922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.759955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.760122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.760147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.760335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.760360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.760602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.760627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.760795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.760820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 
00:24:48.917 [2024-07-15 17:47:43.760980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.761007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.761204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.761229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.761400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.761424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.761560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.761585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.761751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.761776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.762018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.762044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.762191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.762216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.917 [2024-07-15 17:47:43.762406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-07-15 17:47:43.762431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.917 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.762569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.762594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.762837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.762861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 
00:24:48.918 [2024-07-15 17:47:43.763037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.763062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.763229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.763253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.763440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.763468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.763640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.763665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.763796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.763821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.764009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.764035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.764177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.764202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.764369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.764394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.764563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.764587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.764726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.764751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 
00:24:48.918 [2024-07-15 17:47:43.764940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.764965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.765108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.765133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.765302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.765327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2337926 Killed "${NVMF_APP[@]}" "$@" 00:24:48.918 [2024-07-15 17:47:43.765460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.765484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.765655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.765679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:24:48.918 [2024-07-15 17:47:43.765820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.765845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:48.918 [2024-07-15 17:47:43.766015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.766039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:48.918 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:48.918 [2024-07-15 17:47:43.766283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.766308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 
00:24:48.918 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.918 [2024-07-15 17:47:43.766449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.766473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.766620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.766645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.918 [2024-07-15 17:47:43.766803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.918 [2024-07-15 17:47:43.766828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.918 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.767002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.767026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.767188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.767212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.767365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.767390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.767526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.767550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.767689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.767714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.767886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.767911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.768077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.768103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 
00:24:48.919 [2024-07-15 17:47:43.768303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.768328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.768515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.768541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.768733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.768760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.768902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.768929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.769066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.769091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.769231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.769256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.769396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.769421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.769581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.769606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.769770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.769795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.769962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.769987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 
00:24:48.919 [2024-07-15 17:47:43.770156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.770181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.770346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.770370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.770545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.770570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2338607 00:24:48.919 [2024-07-15 17:47:43.770727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.770752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2338607 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:48.919 [2024-07-15 17:47:43.770922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.770948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2338607 ']' 00:24:48.919 [2024-07-15 17:47:43.771088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.771113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.919 [2024-07-15 17:47:43.771300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.771326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 
00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.919 [2024-07-15 17:47:43.771490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.919 [2024-07-15 17:47:43.771516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.919 [2024-07-15 17:47:43.771662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.771688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 17:47:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:48.919 [2024-07-15 17:47:43.771827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.771853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.771997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.772023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.772189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.772214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.772356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.772382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.772543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.772567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.772715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.772740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it.
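Annotation: the waitforlisten trace above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...", with local max_retries=100) is the harness polling the freshly launched nvmf_tgt's RPC socket before it issues any RPCs. A rough sketch of that polling pattern, assuming only that the target creates a UNIX-domain stream socket at the given path; this is an illustration, not SPDK's actual waitforlisten implementation:

```python
import socket
import time

def wait_for_unix_socket(path: str, max_retries: int = 100, interval: float = 0.5) -> bool:
    """Poll until a UNIX-domain stream socket at `path` accepts connections.

    Illustrative only: the retry count echoes the max_retries=100 seen in the
    trace above, but the interval and return convention are assumptions.
    """
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True           # something is listening; RPCs can be issued
        except OSError:
            time.sleep(interval)  # socket missing or refusing; try again
        finally:
            s.close()
    return False

# e.g. wait_for_unix_socket("/var/tmp/spdk.sock")
```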
00:24:48.919 [2024-07-15 17:47:43.772887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.772923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.773063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.773087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.773278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.773304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.919 qpair failed and we were unable to recover it. 00:24:48.919 [2024-07-15 17:47:43.773464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.919 [2024-07-15 17:47:43.773490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.773651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.773676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.773818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.773843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.774018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.774045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.774185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.774210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.774412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.774438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.774609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.774634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 
00:24:48.920 [2024-07-15 17:47:43.774830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.774860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.775024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.775051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.775194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.775219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.775368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.775393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.775588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.775614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.775808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.775850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.776026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.776055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.776222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.776248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.776414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.776441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.776608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.776635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 
00:24:48.920 [2024-07-15 17:47:43.776777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.776804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.776978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.777006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.777192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.777220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.777369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.777395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.777554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.777580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.777739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.777765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.777967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.777994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.778128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.778154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.778302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.778328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.778464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.778490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 
00:24:48.920 [2024-07-15 17:47:43.778654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.778680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.778841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.778867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.779061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.779087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.779220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.779246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.779390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.779415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.779582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.779607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.779745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.779771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.779918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.779949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.780109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.780135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.780292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.780318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 
00:24:48.920 [2024-07-15 17:47:43.780518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.780543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.780685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.780711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.780849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.780880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.920 qpair failed and we were unable to recover it. 00:24:48.920 [2024-07-15 17:47:43.781022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.920 [2024-07-15 17:47:43.781048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.781208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.781234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.781413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.781439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.781625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.781651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.781902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.781938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.782114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.782140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.782342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.782367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 
00:24:48.921 [2024-07-15 17:47:43.782543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.782569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.782711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.782737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.782910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.782936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.783103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.783128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.783263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.783288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.783432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.783458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.783598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.783623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.783764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.783790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.783965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.783993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.784159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.784184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 
00:24:48.921 [2024-07-15 17:47:43.784380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.784405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.784572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.784598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.784767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.784793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.784963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.784990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.785172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.785202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.785369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.785394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.785525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.785551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.785708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.785734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.785881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.785907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.786042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.786068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 
00:24:48.921 [2024-07-15 17:47:43.786222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.786248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.786417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.786443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.786606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.786631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.786763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.786789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.786931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.786956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.787097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.787123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.787256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.787281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.787440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.787466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.787633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.787659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.787828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.787853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 
00:24:48.921 [2024-07-15 17:47:43.788048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.788091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.788277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.788317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.788468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.788496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.921 [2024-07-15 17:47:43.788662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.921 [2024-07-15 17:47:43.788688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.921 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.788823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.788849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.788997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.789025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.789207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.789233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.789401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.789428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.789618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.789645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.789839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.789866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 
00:24:48.922 [2024-07-15 17:47:43.790027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.790055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.790202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.790235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.790375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.790402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.790551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.790578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.790746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.790773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.790942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.790970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.791134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.791161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.791305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.791332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.791493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.791518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.791688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.791715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 
00:24:48.922 [2024-07-15 17:47:43.791902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.791929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.792097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.792123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.792259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.792285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.792435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.792462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.792671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.792699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.792846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.792888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.793025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.793051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.793193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.793219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.793361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.793386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.793557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.793583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 
00:24:48.922 [2024-07-15 17:47:43.793752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.793781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.793936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.793963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.794166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.794192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.794328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.794354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.794510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.794536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.794707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.794734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.794905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.794933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.795096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.795121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.795293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.795319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.795484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.795510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 
00:24:48.922 [2024-07-15 17:47:43.795710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.795735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.795873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.795903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.796038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.796064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.922 [2024-07-15 17:47:43.796236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.922 [2024-07-15 17:47:43.796262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.922 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.796433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.796473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.796662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.796689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.796870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.796906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.797046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.797073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.797261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.797288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.797449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.797476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 
00:24:48.923 [2024-07-15 17:47:43.797647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.797673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.797842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.797868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.798045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.798085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.798272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.798299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.798486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.798512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.798687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.798714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.798901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.798950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.799110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.799151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.799314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.799341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.799483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.799511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 
00:24:48.923 [2024-07-15 17:47:43.799688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.799716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.799873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.799909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.800052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.800079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.800247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.800273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.800438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.800464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.800636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.800662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.800824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.800850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.800995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.801022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.801214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.801240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.801403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.801428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 
00:24:48.923 [2024-07-15 17:47:43.801590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.801616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.801788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.801813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.802000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.802040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.802238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.802266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.802403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.802429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.802578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.802604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.802772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.923 [2024-07-15 17:47:43.802798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.923 qpair failed and we were unable to recover it. 00:24:48.923 [2024-07-15 17:47:43.802941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.802968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.803134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.803168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.803338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.803365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 
00:24:48.924 [2024-07-15 17:47:43.803608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.803634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.803802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.803828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.803965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.803993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.804132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.804157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.804347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.804373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.804539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.804564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.804708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.804734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.804886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.804913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.805051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.805077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.805237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.805262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 
00:24:48.924 [2024-07-15 17:47:43.805506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.805531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.805673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.805698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.805908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.805948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.806097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.806125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.806289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.806315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.806498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.806524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.806688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.806714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.806849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.806882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.807029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.807055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.807191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.807217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 
00:24:48.924 [2024-07-15 17:47:43.807379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.807405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.807569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.807595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.807752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.807778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.807915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.807942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.808087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.808112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.808282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.808313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.808482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.808508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.808649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.808676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.808838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.808864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.809015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.809042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 
00:24:48.924 [2024-07-15 17:47:43.809186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.809213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.809346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.809372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.809527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.809553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.809738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.809764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.809932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.809959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.810103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.810129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.810296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.810322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.924 qpair failed and we were unable to recover it. 00:24:48.924 [2024-07-15 17:47:43.810486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.924 [2024-07-15 17:47:43.810512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.810654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.810682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.810829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.810855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 
00:24:48.925 [2024-07-15 17:47:43.810997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.811024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.811180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.811206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.811346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.811373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.811525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.811552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.811720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.811746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.811888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.811915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.812070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.812095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.812262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.812288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.812421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.812448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.812585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.812612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 
00:24:48.925 [2024-07-15 17:47:43.812777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.812803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.812989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.813016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.813177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.813220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.813368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.813395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.813526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.813553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.813692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.813717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.813896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.813924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.814067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.814094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.814261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.814287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.814452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.814478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 
00:24:48.925 [2024-07-15 17:47:43.814629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.814669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.814868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.814907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.815050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.815076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.815213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.815239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.815377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.815403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.815540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.815571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.815713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.815739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.815894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.815920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.816055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.816080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.816267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.816293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 
00:24:48.925 [2024-07-15 17:47:43.816451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.816477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.816640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.816665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.816801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.816826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.817004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.817030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.817197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.817223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.817393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.817418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.817583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.817608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.925 [2024-07-15 17:47:43.817744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.925 [2024-07-15 17:47:43.817770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.925 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.817916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.817953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.818159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.818200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 [2024-07-15 17:47:43.818182] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:48.926 qpair failed and we were unable to recover it. 
00:24:48.926 [2024-07-15 17:47:43.818272] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.926 [2024-07-15 17:47:43.818354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.818382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.818532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.818557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.818705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.818731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.818903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.818930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.819092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.819119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.819249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.819276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.819417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.819444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.819612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.819638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.819784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.819813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 
00:24:48.926 [2024-07-15 17:47:43.820009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.820035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.820228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.820254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.820392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.820422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.820619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.820645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.820789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.820814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.820951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.820980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.821147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.821174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.821368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.821395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.821566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.821592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.821785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.821811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 
00:24:48.926 [2024-07-15 17:47:43.821993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.822020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.822223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.822249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.822414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.822440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.822604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.822631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.822805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.822831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.823015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.823042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.823219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.823246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.823406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.823432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.823578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.823606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.823804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.823830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 
00:24:48.926 [2024-07-15 17:47:43.823991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.824032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.824204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.824231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.824401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.824427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.824692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.824716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.824934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.824961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.825106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.825132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.825351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.825391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.825600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.926 [2024-07-15 17:47:43.825626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.926 qpair failed and we were unable to recover it. 00:24:48.926 [2024-07-15 17:47:43.825791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.825817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.826028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.826069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 
00:24:48.927 [2024-07-15 17:47:43.826239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.826268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.826434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.826462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.826639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.826665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.826807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.826833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.827030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.827070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.827248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.827275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.827441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.827467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.827631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.827657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.827797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.827823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.828003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.828030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 
00:24:48.927 [2024-07-15 17:47:43.828176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.828204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.828431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.828456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.828643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.828674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.828839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.828865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.829011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.829037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.829186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.829213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.829384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.829411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.829589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.829628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.829768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.829795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.829970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.829997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 
00:24:48.927 [2024-07-15 17:47:43.830165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.830193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.830379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.830404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.830568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.830594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.830779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.830805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.831000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.831026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.831204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.831229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.831403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.831430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.831622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.831648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.831784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.831809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 00:24:48.927 [2024-07-15 17:47:43.831989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.927 [2024-07-15 17:47:43.832029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:48.927 qpair failed and we were unable to recover it. 
00:24:48.927 [2024-07-15 17:47:43.832227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:48.927 [2024-07-15 17:47:43.832254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 
00:24:48.927 qpair failed and we were unable to recover it. 
00:24:48.927 - 00:24:48.930 [2024-07-15 17:47:43.832422 - 17:47:43.853601] the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420) repeats continuously for tqpair=0x7effc0000b90, 0x1c52200, 0x7effb8000b90 and 0x7effc8000b90, with each attempt ending "qpair failed and we were unable to recover it." One EAL notice is interleaved with this output:
00:24:48.930 EAL: No free 2048 kB hugepages reported on node 1 
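The interleaved "EAL: No free 2048 kB hugepages reported on node 1" message comes from DPDK's environment abstraction layer and means that no 2048 kB (2 MB) hugepages were available on that NUMA node when the target initialized. As an illustrative aside (not part of this run's output), the per-node pools can be inspected through the standard procfs/sysfs paths, and SPDK's scripts/setup.sh is the usual way to reserve them; the HUGEMEM size below is an arbitrary example value, not something taken from this job:

  # Inspect the global and per-node 2 MB hugepage pools
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

  # Reserve hugepages before starting the SPDK target (HUGEMEM is in megabytes; 4096 is only an example)
  sudo HUGEMEM=4096 ./scripts/setup.sh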
00:24:48.930 - 00:24:48.933 [2024-07-15 17:47:43.853742 - 17:47:43.872636] the connect() failed, errno = 111 / sock connection error pair continues for tqpair=0x7effc8000b90 and 0x7effb8000b90 (addr=10.0.0.2, port=4420); every attempt ends "qpair failed and we were unable to recover it." 
00:24:48.933 [2024-07-15 17:47:43.872814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.872840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.872993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.873020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.873164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.873190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.873356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.873382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.873560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.873586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.873748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.873774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.873920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.873949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.874143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.874170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.874337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.874365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.874536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.874563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 
00:24:48.933 [2024-07-15 17:47:43.874729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.874755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.874946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.874972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.875108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.875134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.875308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.875339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.875533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.875560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.875730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.875757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.875897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.875924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.876117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.876146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.876309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.876335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.876474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.876500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 
00:24:48.933 [2024-07-15 17:47:43.876659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.876685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.876823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.876850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.877050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.877077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.877219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.877246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.877380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.877407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.877538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.877564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.877706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.877735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.877912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.933 [2024-07-15 17:47:43.877940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.933 qpair failed and we were unable to recover it. 00:24:48.933 [2024-07-15 17:47:43.878135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.878161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.878303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.878331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 
00:24:48.934 [2024-07-15 17:47:43.878509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.878535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.878728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.878754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.878920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.878947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.879113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.879139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.879304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.879330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.879494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.879521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.879683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.879710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.879845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.879872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.880046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.880073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.880242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.880268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 
00:24:48.934 [2024-07-15 17:47:43.880408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.880434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.880575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.880603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.880744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.880770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.880915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.880942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.881109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.881135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.881297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.881323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.881458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.881483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.881644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.881670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.881833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.881859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.881999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.882025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 
00:24:48.934 [2024-07-15 17:47:43.882192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.882218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.882351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.882377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.882546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.882571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.882763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.882792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.882953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.882980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.883105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.883131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.883278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.883304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.883458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.883484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.883635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.883661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.883850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.883882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 
00:24:48.934 [2024-07-15 17:47:43.884043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.884068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.884224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.884249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.884413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.884440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.884604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.884630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.884822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.884848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.885019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.885045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.885189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.885216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.885386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.885412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.934 [2024-07-15 17:47:43.885596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.934 [2024-07-15 17:47:43.885622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.934 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.885783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.885808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 
00:24:48.935 [2024-07-15 17:47:43.885977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.886003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.886142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.886168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.886333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.886358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.886488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.886523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.886686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.886711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.886873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.886916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.887071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:48.935 [2024-07-15 17:47:43.887103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.887128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.887301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.887328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.887478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.887504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 
00:24:48.935 [2024-07-15 17:47:43.887664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.887689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.887896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.887922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.888182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.888208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.888372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.888399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.888545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.888570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.888712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.888739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.888989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.889015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.889184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.889209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.889375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.889400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.889565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.889591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 
00:24:48.935 [2024-07-15 17:47:43.889728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.889753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.889893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.889919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.890086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.890112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.890301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.890327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.890470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.890496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.890625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.890651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.890816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.890842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.891024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.891050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.891213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.891239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.891414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.891440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 
00:24:48.935 [2024-07-15 17:47:43.891630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.891656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.891827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.891853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.892036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.892076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.892245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.892273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.892422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.892449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.892639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.892665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.892805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.892832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.893011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.893044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.893216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.893243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 00:24:48.935 [2024-07-15 17:47:43.893439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.935 [2024-07-15 17:47:43.893466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.935 qpair failed and we were unable to recover it. 
00:24:48.936 [2024-07-15 17:47:43.893660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.893686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.893858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.893892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.894060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.894086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.894257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.894284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.894426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.894453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.894615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.894641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.894785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.894810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.894973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.895013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.895211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.895239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.895410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.895438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 
00:24:48.936 [2024-07-15 17:47:43.895630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.895657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.895835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.895863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.896022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.896050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.896219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.896246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.896413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.896440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.896631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.896657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.896793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.896820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.897067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.897095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.897295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.897321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.897493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.897521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 
00:24:48.936 [2024-07-15 17:47:43.897718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.897744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.897912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.897939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.898106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.898133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.898326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.898353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.898533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.898559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.898726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.898752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.898921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.898947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.899090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.899118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.899269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.899296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.899438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.899464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 
00:24:48.936 [2024-07-15 17:47:43.899608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.899634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.899799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.899826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.900036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.900063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.900206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.900233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.900404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.936 [2024-07-15 17:47:43.900431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.936 qpair failed and we were unable to recover it. 00:24:48.936 [2024-07-15 17:47:43.900624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.937 [2024-07-15 17:47:43.900651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.937 qpair failed and we were unable to recover it. 00:24:48.937 [2024-07-15 17:47:43.900796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.937 [2024-07-15 17:47:43.900823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effb8000b90 with addr=10.0.0.2, port=4420 00:24:48.937 qpair failed and we were unable to recover it. 00:24:48.937 [2024-07-15 17:47:43.900966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.937 [2024-07-15 17:47:43.901001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.937 qpair failed and we were unable to recover it. 00:24:48.937 [2024-07-15 17:47:43.901175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.937 [2024-07-15 17:47:43.901201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.937 qpair failed and we were unable to recover it. 00:24:48.937 [2024-07-15 17:47:43.901392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.937 [2024-07-15 17:47:43.901419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.937 qpair failed and we were unable to recover it. 
00:24:48.942 [2024-07-15 17:47:43.938744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.938771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.938911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.938939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.939128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.939155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.939336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.939363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.939519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.939546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.939746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.939773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.939929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.939958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.940124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.940151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.940353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.940381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.940547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.940574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 
00:24:48.942 [2024-07-15 17:47:43.940714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.940742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.940936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.940964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.942 [2024-07-15 17:47:43.941150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.942 [2024-07-15 17:47:43.941178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.942 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.941339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.941366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.941535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.941562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.941722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.941749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.941895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.941924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.942116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.942142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.942288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.942314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.942510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.942538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 
00:24:48.943 [2024-07-15 17:47:43.942732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.942760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.942939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.942967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.943137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.943164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.943342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.943368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.943560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.943586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.943779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.943804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.943944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.943971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.944144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.944169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.944329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.944356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.944494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.944520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 
00:24:48.943 [2024-07-15 17:47:43.944685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.944711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.944904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.944931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.945129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.945155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.945344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.945370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.945511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.945541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.945679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.945704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.945870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.945903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.946064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.946090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.946224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.946250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.946394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.946421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 
00:24:48.943 [2024-07-15 17:47:43.946588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.946614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.946802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.946828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.947041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.947068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.947236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.947262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.947488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.947515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.947705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.943 [2024-07-15 17:47:43.947731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.943 qpair failed and we were unable to recover it. 00:24:48.943 [2024-07-15 17:47:43.947872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.947915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.948083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.948109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.948280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.948307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.948521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.948547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 
00:24:48.944 [2024-07-15 17:47:43.948736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.948762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.948954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.948981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.949151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.949176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.949305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.949331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.949521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.949547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.949715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.949741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.949908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.949934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.950099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.950125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.950293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.950319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.950533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.950559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 
00:24:48.944 [2024-07-15 17:47:43.950733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.950758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.950895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.950922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.951114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.951141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.951327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.951353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.951569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.951594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.951757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.951783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.951945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.951972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.952138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.952165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.952357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.952383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.952531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.952557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 
00:24:48.944 [2024-07-15 17:47:43.952717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.952743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.952987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.953014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.953184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.953210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.953354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.953381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.953592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.953622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.953789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.953815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.953985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.954012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.954177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.954204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.954367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.954393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.954582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.954608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 
00:24:48.944 [2024-07-15 17:47:43.954861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.954901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.955065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.955092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.955271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.955297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.955467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.955493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.955654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.955680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.955826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.955852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.944 [2024-07-15 17:47:43.956049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.944 [2024-07-15 17:47:43.956076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.944 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.956248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.956273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.956422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.956448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.956639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.956665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 
00:24:48.945 [2024-07-15 17:47:43.956803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.956828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.956995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.957022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.957187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.957214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.957374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.957401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.957537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.957563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.957729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.957755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.957901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.957927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.958095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.958121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.958288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.958314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.958475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.958501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 
00:24:48.945 [2024-07-15 17:47:43.958669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.958696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.958869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.958903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.959046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.959075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.959244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.959271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.959406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.959433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.959604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.959631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.959825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.959852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.960053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.960080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.960271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.960298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.960464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.960493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 
00:24:48.945 [2024-07-15 17:47:43.960622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.960648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.960778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.960805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.960999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.961026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.961196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.961223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.961387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.961418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.961551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.961577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.961768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.961794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.961959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.961986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.945 [2024-07-15 17:47:43.962179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.945 [2024-07-15 17:47:43.962205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.945 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.962350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.962377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 
00:24:48.946 [2024-07-15 17:47:43.962566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.962593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.962739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.962766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.962982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.963010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.963175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.963201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.963363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.963389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.963558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.963585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.963743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.963770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.963913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.963940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.964114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.964141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.964286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.964312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 
00:24:48.946 [2024-07-15 17:47:43.964453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.964482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.964657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.964683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.964849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.964884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.965031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.965058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.965206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.965234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.965391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.965418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.965558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.965585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.965779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.965805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.966001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.966028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.966166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.966192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 
00:24:48.946 [2024-07-15 17:47:43.966378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.966405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.966573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.966600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.966769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.966796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.966943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.966970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.967138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.967164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.967307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.967334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.967464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.967491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.967654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.967681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.967852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.967885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 00:24:48.946 [2024-07-15 17:47:43.968057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.946 [2024-07-15 17:47:43.968084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.946 qpair failed and we were unable to recover it. 
00:24:48.947 [2024-07-15 17:47:43.968255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.968283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.968428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.968455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.968627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.968654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.968822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.968849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.969002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.969034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.969198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.969225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.969362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.969389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.969534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.969561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.969752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.969779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.969955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.969982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 
00:24:48.947 [2024-07-15 17:47:43.970173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.970199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.970365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.970391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.970535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.970561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.970720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.970746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.970893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.970921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.971090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.971117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.971286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.971313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.971479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.971506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.971678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.971705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.971873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.971914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 
00:24:48.947 [2024-07-15 17:47:43.972072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.972098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.972289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.972316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.972478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.972505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.972665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.972691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.972829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.972856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.973000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.973026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.973170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.973208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.973380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.973408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.973580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.973606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.973767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.973794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 
00:24:48.947 [2024-07-15 17:47:43.973936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.973963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.974130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.974158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.974351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.974378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.974546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.974573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.974733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.974760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.974920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.974948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.975100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.975127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.975265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.975292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.975436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.947 [2024-07-15 17:47:43.975463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.947 qpair failed and we were unable to recover it. 00:24:48.947 [2024-07-15 17:47:43.975622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.975649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 
00:24:48.948 [2024-07-15 17:47:43.975783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.975810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.975996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.976024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.976156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.976182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.976374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.976401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.976537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.976572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.976739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.976766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.976930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.976958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.977124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.977152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.977323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.977350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.977493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.977521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 
00:24:48.948 [2024-07-15 17:47:43.977681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.977708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.977857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.977890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.978057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.978085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.978247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.978274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.978416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.978443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.978634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.978661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.978823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.978851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.979011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.979037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.979200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.979227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.979416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.979443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 
00:24:48.948 [2024-07-15 17:47:43.979585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.979612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.979778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.979805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.979948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.979976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.980149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.980176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.980370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.980397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.980568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.980595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.980764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.980792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.980967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.980995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.981191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.981217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.981383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.981410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 
00:24:48.948 [2024-07-15 17:47:43.981575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.981602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.981769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.981796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.981961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.981989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.982127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.982154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.982347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.982374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.982543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.982571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.982769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.982796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.982960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.982988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.983140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.948 [2024-07-15 17:47:43.983167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.948 qpair failed and we were unable to recover it. 00:24:48.948 [2024-07-15 17:47:43.983357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.983384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 
00:24:48.949 [2024-07-15 17:47:43.983546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.983573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.983742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.983771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.983933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.983961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.984118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.984145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.984290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.984328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.984486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.984513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.984708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.984734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.984890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.984918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.985089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.985116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.985288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.985315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 
00:24:48.949 [2024-07-15 17:47:43.985484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.985511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.985644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.985670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.985837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.985864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.986040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.986066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.986210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.986236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.986407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.986433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.986577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.986605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.986738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.986766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.986970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.986998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.987141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.987168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 
00:24:48.949 [2024-07-15 17:47:43.987350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.987377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.987550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.987577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.987740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.987767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.987908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.987937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.988110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.988138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.988328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.988355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.988486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.988513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.988641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.988669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.988838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.988865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.989035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.989063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 
00:24:48.949 [2024-07-15 17:47:43.989230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.989259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.989435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.989462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.989602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.989630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.989822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.989848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.990045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.990072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.990266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.990293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.990480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.990507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.990670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.990696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.949 qpair failed and we were unable to recover it. 00:24:48.949 [2024-07-15 17:47:43.990861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.949 [2024-07-15 17:47:43.990895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.991089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.991115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 
00:24:48.950 [2024-07-15 17:47:43.991269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.991295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.991454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.991481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.991641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.991668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.991835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.991863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.992039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.992071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.992228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.992254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.992440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.992466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.992622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.992649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.992813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.992838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.992987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.993019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 
00:24:48.950 [2024-07-15 17:47:43.993158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.993185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.993329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.993354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.993517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.993543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.993704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.993731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.993888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.993915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.994065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.994091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.994252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.994279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.994468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.994494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.994690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.994717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.994850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.994889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 
00:24:48.950 [2024-07-15 17:47:43.995048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.995075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.995201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.995228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.995366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.995393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.995558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.995586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.995776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.995803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.995976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.996004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.996138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.996164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.996325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.996352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.996546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.996573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.996715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.996742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 
00:24:48.950 [2024-07-15 17:47:43.996887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.996915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.950 qpair failed and we were unable to recover it. 00:24:48.950 [2024-07-15 17:47:43.997054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.950 [2024-07-15 17:47:43.997081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.997291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.997318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.997455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.997481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.997646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.997672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.997841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.997867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.998009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.998035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.998209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.998235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.998379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.998405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.998575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.998600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 
00:24:48.951 [2024-07-15 17:47:43.998740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.998766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.998909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.998936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.999094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.999120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.999318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.999344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.999502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.999532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.999693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.999719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:43.999864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:43.999906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.000052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.000077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.000242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.000268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.000431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.000457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 
00:24:48.951 [2024-07-15 17:47:44.000621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.000646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.000777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.000802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.000949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.000977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.001147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.001172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.001302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.001327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.001562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.001587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.001754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.001779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.001913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.001939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.002077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.002102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.002238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.002264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 
00:24:48.951 [2024-07-15 17:47:44.002394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.002420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.002574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.002615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.002755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.002749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.951 [2024-07-15 17:47:44.002783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 [2024-07-15 17:47:44.002784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.002800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.951 [2024-07-15 17:47:44.002813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.951 [2024-07-15 17:47:44.002824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.951 [2024-07-15 17:47:44.002948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.002916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:48.951 [2024-07-15 17:47:44.002976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.002946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:48.951 [2024-07-15 17:47:44.002992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:48.951 [2024-07-15 17:47:44.002995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:48.951 [2024-07-15 17:47:44.003151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.003186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.003327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.003353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.003496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.003522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 
00:24:48.951 [2024-07-15 17:47:44.003662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.003688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.951 [2024-07-15 17:47:44.003857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.951 [2024-07-15 17:47:44.003893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.951 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.004047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.004072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.004213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.004239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.004372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.004398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.004528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.004554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.004713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.004739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.004892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.004920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.005112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.005138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.005307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.005333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 
00:24:48.952 [2024-07-15 17:47:44.005496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.005521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.005654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.005680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.005810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.005836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.005974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.006000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.006150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.006176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.006346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.006372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.006551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.006577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.006722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.006759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.006985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.007012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.007149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.007190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 
00:24:48.952 [2024-07-15 17:47:44.007375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.007401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.007574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.007600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.007763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.007789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.007924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.007950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.008085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.008111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.008292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.008318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.008465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.008491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.008656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.008682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.008830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.008856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.009000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.009028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 
00:24:48.952 [2024-07-15 17:47:44.009192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.009218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.009396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.009422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.009637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.009663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.009798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.009823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.010000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.010026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.010187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.010212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.010492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.010519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.010678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.010704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.010884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.010911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.011087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.011113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 
00:24:48.952 [2024-07-15 17:47:44.011250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.011277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.011455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.952 [2024-07-15 17:47:44.011481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.952 qpair failed and we were unable to recover it. 00:24:48.952 [2024-07-15 17:47:44.011649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.011676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.011804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.011830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.011985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.012011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.012143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.012168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.012312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.012338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.012493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.012519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.012679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.012706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.012845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.012871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 
00:24:48.953 [2024-07-15 17:47:44.013025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.013051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.013208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.013234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.013441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.013468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.013607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.013634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.013811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.013838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.014021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.014051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.014196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.014223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.014361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.014387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.014512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.014539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.014707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.014734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 
00:24:48.953 [2024-07-15 17:47:44.014899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.014936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.015092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.015118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.015254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.015281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.015469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.015496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.015689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.015716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.015848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.015874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.016025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.016051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.016181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.016207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.016422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.016448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.016650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.016676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 
00:24:48.953 [2024-07-15 17:47:44.016817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.016844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.016992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.017018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.017164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.017194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.017366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.017392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.017541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.017568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.017716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.017742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.017917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.017943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.018132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.018158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.018318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.018344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.018532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.018559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 
00:24:48.953 [2024-07-15 17:47:44.018733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.018759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.018936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.018962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.953 [2024-07-15 17:47:44.019098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.953 [2024-07-15 17:47:44.019128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.953 qpair failed and we were unable to recover it. 00:24:48.954 [2024-07-15 17:47:44.019337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.019364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 00:24:48.954 [2024-07-15 17:47:44.019602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.019629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 00:24:48.954 [2024-07-15 17:47:44.019766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.019792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 00:24:48.954 [2024-07-15 17:47:44.019934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.019961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 00:24:48.954 [2024-07-15 17:47:44.020127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.020152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 00:24:48.954 [2024-07-15 17:47:44.020348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.020375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 00:24:48.954 [2024-07-15 17:47:44.020510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.020537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 
00:24:48.954 [2024-07-15 17:47:44.020670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.954 [2024-07-15 17:47:44.020696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:48.954 qpair failed and we were unable to recover it. 00:24:49.221 [2024-07-15 17:47:44.020827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.221 [2024-07-15 17:47:44.020854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.221 qpair failed and we were unable to recover it. 00:24:49.221 [2024-07-15 17:47:44.021016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.221 [2024-07-15 17:47:44.021042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.221 qpair failed and we were unable to recover it. 00:24:49.221 [2024-07-15 17:47:44.021190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.221 [2024-07-15 17:47:44.021218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.221 qpair failed and we were unable to recover it. 00:24:49.221 [2024-07-15 17:47:44.021390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.221 [2024-07-15 17:47:44.021416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.221 qpair failed and we were unable to recover it. 00:24:49.221 [2024-07-15 17:47:44.021584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.221 [2024-07-15 17:47:44.021610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.221 qpair failed and we were unable to recover it. 00:24:49.221 [2024-07-15 17:47:44.021779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.221 [2024-07-15 17:47:44.021805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.221 qpair failed and we were unable to recover it. 00:24:49.221 [2024-07-15 17:47:44.021937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.221 [2024-07-15 17:47:44.021964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.221 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.022111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.022138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.022290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.022316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 
00:24:49.222 [2024-07-15 17:47:44.022485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.022512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.022680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.022707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.022875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.022920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.023051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.023077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.023212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.023238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.023378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.023404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.023576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.023602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.023745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.023771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.023929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.023955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.024147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.024181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 
00:24:49.222 [2024-07-15 17:47:44.024335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.024361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.024507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.024533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.024697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.024724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.024869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.024901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.025071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.025098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.025289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.025316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.025475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.025502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.025629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.025656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.025794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.025821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.025980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.026007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 
00:24:49.222 [2024-07-15 17:47:44.026141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.026167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.026335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.026362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.026505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.026532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.026670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.026697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.026843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.026870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.027192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.027219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.027453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.027480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.027654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.027680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.027850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.027881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 00:24:49.222 [2024-07-15 17:47:44.028045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.222 [2024-07-15 17:47:44.028071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.222 qpair failed and we were unable to recover it. 
00:24:49.222 [2024-07-15 17:47:44.028203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.222 [2024-07-15 17:47:44.028230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:49.222 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet (posix.c:1038 connect() failed errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 17:47:44.028 through 17:47:44.066 ...]
00:24:49.229 [2024-07-15 17:47:44.066821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:49.229 [2024-07-15 17:47:44.066847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420
00:24:49.229 qpair failed and we were unable to recover it.
00:24:49.229 [2024-07-15 17:47:44.067031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.067057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.067196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.067221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.067380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.067406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.067586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.067611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.067753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.067778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.067920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.067945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.068107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.068132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.068281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.068306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.068466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.068492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.068665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.068690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 
00:24:49.229 [2024-07-15 17:47:44.068826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.068851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.069087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.069113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.069350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.069376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.069537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.069562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.069699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.229 [2024-07-15 17:47:44.069724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.229 qpair failed and we were unable to recover it. 00:24:49.229 [2024-07-15 17:47:44.069869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.069900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.070042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.070067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.070199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.070224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.070371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.070395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.070536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.070561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 
00:24:49.230 [2024-07-15 17:47:44.070728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.070753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.070923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.070949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.071105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.071130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.071271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.071296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.071455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.071480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.071640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.071665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.071852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.071881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.072030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.072059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.072234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.072259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.072394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.072419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 
00:24:49.230 [2024-07-15 17:47:44.072545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.072570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.072758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.072782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.072931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.072965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.073113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.073138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.073279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.073304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.073449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.073474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.073653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.073678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.073815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.073840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.073986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.074012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.074155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.074180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 
00:24:49.230 [2024-07-15 17:47:44.074370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.230 [2024-07-15 17:47:44.074395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.230 qpair failed and we were unable to recover it. 00:24:49.230 [2024-07-15 17:47:44.074540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.074566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.074745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.074770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.074948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.074974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.075118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.075143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.075286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.075310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.075438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.075463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.075592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.075617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.075768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.075793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.075943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.075968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 
00:24:49.231 [2024-07-15 17:47:44.076098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.076123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.076297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.076322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.076450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.076474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.076604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.076629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.076761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.076790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.076962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.076988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.077120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.077145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.077376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.077400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.077595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.077620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.077752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.077777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 
00:24:49.231 [2024-07-15 17:47:44.077925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.077951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.078113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.078137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.078299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.078324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.078492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.078517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.078679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.078704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.078883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.078908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.079089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.079114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.231 qpair failed and we were unable to recover it. 00:24:49.231 [2024-07-15 17:47:44.079245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.231 [2024-07-15 17:47:44.079270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.079421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.079446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.079584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.079609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 
00:24:49.232 [2024-07-15 17:47:44.079748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.079773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.079918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.079945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.080116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.080141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.080285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.080309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.080439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.080464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.080605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.080632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.080794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.080819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.081051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.081094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.081276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.081304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.081480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.081506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 
00:24:49.232 [2024-07-15 17:47:44.081655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.081682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.081828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.081871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.082038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.082064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.082201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.082227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.082364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.082390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.082534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.082559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.082692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.082721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.082853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.082884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.083063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.083088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.083221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.083246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 
00:24:49.232 [2024-07-15 17:47:44.083371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.083396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.083573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.083598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.232 [2024-07-15 17:47:44.083741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.232 [2024-07-15 17:47:44.083766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.232 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.083905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.083941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.084082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.084107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.084256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.084281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.084444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.084469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.084609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.084634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.084791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.084816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.084988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.085014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 
00:24:49.233 [2024-07-15 17:47:44.085152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.085176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.085315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.085340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.085488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.085513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.085641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.085666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.085827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.085852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.086022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.086047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.086187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.086212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.086341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.086366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.086526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.086554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.086730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.086755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 
00:24:49.233 [2024-07-15 17:47:44.086942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.086967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.087135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.087160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.087348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.087373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.087504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.087529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.233 qpair failed and we were unable to recover it. 00:24:49.233 [2024-07-15 17:47:44.087692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.233 [2024-07-15 17:47:44.087717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.087841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.087866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.088048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.088073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.088204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.088229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.088422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.088447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.088588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.088616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 
00:24:49.234 [2024-07-15 17:47:44.088783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.088808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.088945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.088971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.089133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.089158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.089328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.089353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.089483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.089507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.089666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.089691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.089824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.089849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.090011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.090036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.090208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.090233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.090368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.090393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 
00:24:49.234 [2024-07-15 17:47:44.090554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.090579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.090709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.090733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.090905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.090930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.091065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.091090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.091256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.091281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.091441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.091465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.091609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.091634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.091771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.091796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.091967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.091992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.092130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.092155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 
00:24:49.234 [2024-07-15 17:47:44.092331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.092357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.234 qpair failed and we were unable to recover it. 00:24:49.234 [2024-07-15 17:47:44.092490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.234 [2024-07-15 17:47:44.092515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.092658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.092682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.092875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.092910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.093052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.093077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.093210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.093235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.093373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.093398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.093565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.093590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.093744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.093769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.093930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.093972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 
00:24:49.235 [2024-07-15 17:47:44.094166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.094203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.094366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.094392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.094556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.094582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.094744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.094770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.094944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.094972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.095121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.095148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.095317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.095344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.095485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.095510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.095675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.095700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.095836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.095861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 
00:24:49.235 [2024-07-15 17:47:44.096017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.096042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.096180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.096205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.096331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.096356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.096518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.096543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.096675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.096700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.096840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.096867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.097054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.097080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.235 [2024-07-15 17:47:44.097214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.235 [2024-07-15 17:47:44.097240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.235 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.097377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.097403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.097537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.097562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 
00:24:49.236 [2024-07-15 17:47:44.097697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.097722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.097866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.097899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.098062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.098087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.098248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.098273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.098437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.098462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.098619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.098644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.098775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.098800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.098971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.098997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.099159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.099184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.099340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.099365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 
00:24:49.236 [2024-07-15 17:47:44.099540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.099565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.099696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.099721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.099889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.099914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.100084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.100109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.100269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.100296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.100434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.100459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.100598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.100623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.100761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.100787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.100946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.100972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.101150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.101181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 
00:24:49.236 [2024-07-15 17:47:44.101320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.101344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.101534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.101559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.101697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.101722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.101856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.101885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.102036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.102061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.102226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.102251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.102379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.102403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.102534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.102559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.102699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.102723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.102857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.102887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 
00:24:49.236 [2024-07-15 17:47:44.103025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.103050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.103193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.236 [2024-07-15 17:47:44.103218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.236 qpair failed and we were unable to recover it. 00:24:49.236 [2024-07-15 17:47:44.103384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.103409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.103578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.103608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.103777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.103802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.103937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.103963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.104090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.104115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.104289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.104313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.104474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.104499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.104634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.104661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 
00:24:49.237 [2024-07-15 17:47:44.104800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.104825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.104976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.105002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.105180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.105205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.105398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.105422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.105572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.105597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.105736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.105760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.105905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.105936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.106117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.106142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.106273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.106298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.106471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.106496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 
00:24:49.237 [2024-07-15 17:47:44.106673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.106698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.106827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.106853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.106994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.107020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.107182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.107207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.107334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.107359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.107518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.107543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.107681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.107705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.107865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.107896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.108072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.108097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.108254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.108278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 
00:24:49.237 [2024-07-15 17:47:44.108415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.108443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.108581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.108606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.108778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.108818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.108992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.109020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.109190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.109216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.109351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.109376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.109531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.109557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 A controller has encountered a failure and is being reset. 00:24:49.237 [2024-07-15 17:47:44.109780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.109819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.109993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.110022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.110192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.110218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 
00:24:49.237 [2024-07-15 17:47:44.110384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.110409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.110560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.237 [2024-07-15 17:47:44.110585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.237 qpair failed and we were unable to recover it. 00:24:49.237 [2024-07-15 17:47:44.110727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.110752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.110912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.110938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.111082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.111109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.111275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.111300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.111458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.111483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.111633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.111672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.111830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.111858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.112003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.112029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 
00:24:49.238 [2024-07-15 17:47:44.112165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.112190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.112326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.112351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.112543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.112567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.112729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.112754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.112883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.112909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.113071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.113096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.113271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.113296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.113460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.113485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.113641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.113666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.113858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.113888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 
00:24:49.238 [2024-07-15 17:47:44.114049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.114074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.114227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.114252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.114409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.114434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.114562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.114586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.114745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.114770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.114917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.114942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.115101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.115125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.115296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.115321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.115490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.115514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.115644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.115669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 
00:24:49.238 [2024-07-15 17:47:44.115801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.115825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.116022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.238 [2024-07-15 17:47:44.116061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.238 qpair failed and we were unable to recover it. 00:24:49.238 [2024-07-15 17:47:44.116208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.116238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.116381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.116408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.116576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.116602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.116770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.116795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.116935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.116961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.117125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.117150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.117338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.117363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52200 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.117500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.117527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 
00:24:49.239 [2024-07-15 17:47:44.117696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.117722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.117864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.117897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.118059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.118084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.118230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.118257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc0000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.118406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.118436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effc8000b90 with addr=10.0.0.2, port=4420 00:24:49.239 qpair failed and we were unable to recover it. 00:24:49.239 [2024-07-15 17:47:44.118608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.239 [2024-07-15 17:47:44.118644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c600e0 with addr=10.0.0.2, port=4420 00:24:49.239 [2024-07-15 17:47:44.118663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c600e0 is same with the state(5) to be set 00:24:49.239 [2024-07-15 17:47:44.118691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c600e0 (9): Bad file descriptor 00:24:49.239 [2024-07-15 17:47:44.118711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.239 [2024-07-15 17:47:44.118725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.239 [2024-07-15 17:47:44.118743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.239 Unable to reset the controller. 
00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.806 Malloc0 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.806 [2024-07-15 17:47:44.809245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.806 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.807 [2024-07-15 17:47:44.837498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.807 17:47:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2338081 00:24:50.373 Controller properly reset. 00:24:55.646 Initializing NVMe Controllers 00:24:55.646 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:55.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:55.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:55.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:55.646 Initialization complete. Launching workers. 
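The trace above provisions the target through rpc_cmd, which in these test scripts forwards each command to SPDK's scripts/rpc.py: a 64 MB malloc bdev with 512-byte blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, a data listener on 10.0.0.2:4420, and a discovery listener on the same address and port. A sketch of the same sequence issued directly with rpc.py, assuming an nvmf_tgt process is already running on the default RPC socket, would look like this:

# Illustrative sketch mirroring the rpc_cmd trace above; run from the SPDK
# source tree against an nvmf_tgt that is already up.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With both listeners up, an initiator-side "nvme discover -t tcp -a 10.0.0.2 -s 4420" would be expected to report the subsystem, which is what the tc2 test depends on before it starts injecting disconnects.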
00:24:55.646 Starting thread on core 1 00:24:55.646 Starting thread on core 2 00:24:55.646 Starting thread on core 3 00:24:55.646 Starting thread on core 0 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:55.646 00:24:55.646 real 0m11.321s 00:24:55.646 user 0m34.906s 00:24:55.646 sys 0m7.759s 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.646 ************************************ 00:24:55.646 END TEST nvmf_target_disconnect_tc2 00:24:55.646 ************************************ 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:55.646 rmmod nvme_tcp 00:24:55.646 rmmod nvme_fabrics 00:24:55.646 rmmod nvme_keyring 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2338607 ']' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2338607 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2338607 ']' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2338607 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2338607 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2338607' 00:24:55.646 killing process with pid 2338607 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2338607 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2338607 00:24:55.646 
17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.646 17:47:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.560 17:47:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.560 00:24:57.560 real 0m16.231s 00:24:57.560 user 1m0.315s 00:24:57.560 sys 0m10.278s 00:24:57.560 17:47:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.560 17:47:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:57.560 ************************************ 00:24:57.560 END TEST nvmf_target_disconnect 00:24:57.560 ************************************ 00:24:57.560 17:47:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:57.560 17:47:52 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:57.560 17:47:52 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.560 17:47:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.560 17:47:52 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:57.560 00:24:57.560 real 19m32.088s 00:24:57.560 user 46m17.029s 00:24:57.560 sys 5m0.290s 00:24:57.560 17:47:52 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.560 17:47:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.560 ************************************ 00:24:57.560 END TEST nvmf_tcp 00:24:57.560 ************************************ 00:24:57.560 17:47:52 -- common/autotest_common.sh@1142 -- # return 0 00:24:57.560 17:47:52 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:24:57.560 17:47:52 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:57.560 17:47:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:57.560 17:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.560 17:47:52 -- common/autotest_common.sh@10 -- # set +x 00:24:57.560 ************************************ 00:24:57.560 START TEST spdkcli_nvmf_tcp 00:24:57.560 ************************************ 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:57.560 * Looking for test storage... 
00:24:57.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:57.560 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2339694 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2339694 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2339694 ']' 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.561 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.561 [2024-07-15 17:47:52.667892] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:24:57.561 [2024-07-15 17:47:52.667966] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2339694 ] 00:24:57.820 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.820 [2024-07-15 17:47:52.735201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:57.820 [2024-07-15 17:47:52.851910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.820 [2024-07-15 17:47:52.851914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.100 17:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.101 17:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:58.101 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:58.101 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:58.101 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:58.101 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:58.101 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:58.101 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:58.101 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:58.101 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:58.101 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:58.101 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:58.101 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:58.101 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:58.101 ' 00:25:00.633 [2024-07-15 17:47:55.537256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.006 [2024-07-15 17:47:56.777614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:04.541 [2024-07-15 17:47:59.068749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:05.920 [2024-07-15 17:48:01.039183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:07.885 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:07.885 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:07.885 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:07.885 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:07.885 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:07.885 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:07.885 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:07.885 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:07.885 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:07.885 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:07.885 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:07.885 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:07.885 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:07.885 17:48:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.144 17:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:08.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:08.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:08.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:08.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:08.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:08.144 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:08.144 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:08.144 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:08.144 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:08.144 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:08.144 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:08.144 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:08.144 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:08.144 ' 00:25:13.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:13.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:13.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:13.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:13.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:13.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:13.411 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:13.411 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:13.411 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:13.411 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:13.411 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:13.411 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:13.411 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:13.411 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2339694 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2339694 ']' 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2339694 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2339694 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2339694' 00:25:13.411 killing process with pid 2339694 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2339694 00:25:13.411 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2339694 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2339694 ']' 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2339694 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2339694 ']' 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2339694 00:25:13.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2339694) - No such process 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2339694 is not found' 00:25:13.670 Process with pid 2339694 is not found 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:13.670 00:25:13.670 real 0m16.077s 00:25:13.670 user 0m33.932s 00:25:13.670 sys 0m0.818s 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.670 17:48:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.670 ************************************ 00:25:13.670 END TEST spdkcli_nvmf_tcp 00:25:13.670 ************************************ 00:25:13.670 17:48:08 -- common/autotest_common.sh@1142 -- # return 0 00:25:13.670 17:48:08 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:13.670 17:48:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.670 17:48:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.670 17:48:08 -- common/autotest_common.sh@10 -- # set +x 00:25:13.670 ************************************ 00:25:13.670 START TEST nvmf_identify_passthru 00:25:13.671 ************************************ 00:25:13.671 17:48:08 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:13.671 * Looking for test storage... 00:25:13.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:13.671 17:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.671 17:48:08 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.671 17:48:08 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.671 17:48:08 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.671 17:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.671 17:48:08 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.671 17:48:08 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.671 17:48:08 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:13.671 17:48:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.671 17:48:08 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.671 17:48:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:13.671 17:48:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:13.671 17:48:08 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:13.671 17:48:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.576 17:48:10 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:15.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:15.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:15.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:15.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:15.576 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:15.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:25:15.835 00:25:15.835 --- 10.0.0.2 ping statistics --- 00:25:15.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.835 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:15.835 00:25:15.835 --- 10.0.0.1 ping statistics --- 00:25:15.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.835 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:15.835 17:48:10 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.835 17:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:15.835 17:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:25:15.835 17:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:25:15.835 17:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:15.835 17:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:15.835 17:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:15.836 17:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:15.836 17:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:15.836 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.029 
17:48:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:20.029 17:48:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:20.029 17:48:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:20.029 17:48:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:20.029 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.219 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:24.219 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.219 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.219 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2344925 00:25:24.219 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:24.219 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.219 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2344925 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2344925 ']' 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:24.219 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.219 [2024-07-15 17:48:19.353843] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:25:24.219 [2024-07-15 17:48:19.353966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.479 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.479 [2024-07-15 17:48:19.420667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.479 [2024-07-15 17:48:19.530491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.479 [2024-07-15 17:48:19.530552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:24.479 [2024-07-15 17:48:19.530580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.479 [2024-07-15 17:48:19.530592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.479 [2024-07-15 17:48:19.530601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.479 [2024-07-15 17:48:19.530687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.479 [2024-07-15 17:48:19.531010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.479 [2024-07-15 17:48:19.531034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.479 [2024-07-15 17:48:19.531037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.479 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.479 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:24.479 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:24.479 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.479 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.479 INFO: Log level set to 20 00:25:24.479 INFO: Requests: 00:25:24.479 { 00:25:24.479 "jsonrpc": "2.0", 00:25:24.479 "method": "nvmf_set_config", 00:25:24.479 "id": 1, 00:25:24.479 "params": { 00:25:24.479 "admin_cmd_passthru": { 00:25:24.479 "identify_ctrlr": true 00:25:24.479 } 00:25:24.479 } 00:25:24.479 } 00:25:24.479 00:25:24.479 INFO: response: 00:25:24.479 { 00:25:24.479 "jsonrpc": "2.0", 00:25:24.479 "id": 1, 00:25:24.479 "result": true 00:25:24.479 } 00:25:24.479 00:25:24.479 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.479 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:24.479 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.479 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.479 INFO: Setting log level to 20 00:25:24.479 INFO: Setting log level to 20 00:25:24.479 INFO: Log level set to 20 00:25:24.479 INFO: Log level set to 20 00:25:24.479 INFO: Requests: 00:25:24.479 { 00:25:24.479 "jsonrpc": "2.0", 00:25:24.479 "method": "framework_start_init", 00:25:24.479 "id": 1 00:25:24.479 } 00:25:24.479 00:25:24.479 INFO: Requests: 00:25:24.479 { 00:25:24.479 "jsonrpc": "2.0", 00:25:24.479 "method": "framework_start_init", 00:25:24.479 "id": 1 00:25:24.479 } 00:25:24.479 00:25:24.737 [2024-07-15 17:48:19.687226] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:24.737 INFO: response: 00:25:24.737 { 00:25:24.737 "jsonrpc": "2.0", 00:25:24.737 "id": 1, 00:25:24.737 "result": true 00:25:24.737 } 00:25:24.737 00:25:24.737 INFO: response: 00:25:24.737 { 00:25:24.737 "jsonrpc": "2.0", 00:25:24.737 "id": 1, 00:25:24.737 "result": true 00:25:24.737 } 00:25:24.737 00:25:24.737 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.737 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:24.737 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.738 17:48:19 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:24.738 INFO: Setting log level to 40 00:25:24.738 INFO: Setting log level to 40 00:25:24.738 INFO: Setting log level to 40 00:25:24.738 [2024-07-15 17:48:19.697413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.738 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.738 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:24.738 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.738 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:24.738 17:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:24.738 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.738 17:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 Nvme0n1 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.020 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.020 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.020 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 [2024-07-15 17:48:22.589326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.020 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.020 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 [ 00:25:28.020 { 00:25:28.020 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:28.020 "subtype": "Discovery", 00:25:28.020 "listen_addresses": [], 00:25:28.020 "allow_any_host": true, 00:25:28.020 "hosts": [] 00:25:28.020 }, 00:25:28.020 { 00:25:28.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.020 "subtype": "NVMe", 00:25:28.020 "listen_addresses": [ 00:25:28.020 { 00:25:28.020 "trtype": "TCP", 00:25:28.020 "adrfam": "IPv4", 00:25:28.020 "traddr": "10.0.0.2", 00:25:28.020 "trsvcid": "4420" 00:25:28.020 } 00:25:28.020 ], 00:25:28.020 "allow_any_host": true, 00:25:28.021 "hosts": [], 00:25:28.021 "serial_number": 
"SPDK00000000000001", 00:25:28.021 "model_number": "SPDK bdev Controller", 00:25:28.021 "max_namespaces": 1, 00:25:28.021 "min_cntlid": 1, 00:25:28.021 "max_cntlid": 65519, 00:25:28.021 "namespaces": [ 00:25:28.021 { 00:25:28.021 "nsid": 1, 00:25:28.021 "bdev_name": "Nvme0n1", 00:25:28.021 "name": "Nvme0n1", 00:25:28.021 "nguid": "A05E1BEBBE314FB293D192DA920685F9", 00:25:28.021 "uuid": "a05e1beb-be31-4fb2-93d1-92da920685f9" 00:25:28.021 } 00:25:28.021 ] 00:25:28.021 } 00:25:28.021 ] 00:25:28.021 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:28.021 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:28.021 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.021 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.021 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.021 17:48:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:28.021 17:48:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:28.021 17:48:22 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.021 17:48:22 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:28.021 17:48:22 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.021 17:48:22 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:28.021 17:48:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.021 17:48:22 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.021 rmmod nvme_tcp 00:25:28.021 rmmod nvme_fabrics 00:25:28.021 rmmod nvme_keyring 00:25:28.021 17:48:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.021 17:48:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:28.021 17:48:23 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:28.021 17:48:23 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2344925 ']' 00:25:28.021 17:48:23 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2344925 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2344925 ']' 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2344925 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344925 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344925' 00:25:28.021 killing process with pid 2344925 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2344925 00:25:28.021 17:48:23 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2344925 00:25:29.937 17:48:24 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.937 17:48:24 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.937 17:48:24 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.937 17:48:24 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.937 17:48:24 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.937 17:48:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.937 17:48:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:29.937 17:48:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.850 17:48:26 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.850 00:25:31.850 real 0m17.992s 00:25:31.850 user 0m26.736s 00:25:31.850 sys 0m2.222s 00:25:31.850 17:48:26 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:31.850 17:48:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.850 ************************************ 00:25:31.850 END TEST nvmf_identify_passthru 00:25:31.850 ************************************ 00:25:31.850 17:48:26 -- common/autotest_common.sh@1142 -- # return 0 00:25:31.850 17:48:26 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:31.850 17:48:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:31.850 17:48:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.850 17:48:26 -- common/autotest_common.sh@10 -- # set +x 00:25:31.850 ************************************ 00:25:31.850 START TEST nvmf_dif 00:25:31.850 ************************************ 00:25:31.850 17:48:26 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:31.850 * Looking for test storage... 
00:25:31.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:31.850 17:48:26 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.850 17:48:26 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.850 17:48:26 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.850 17:48:26 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.850 17:48:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.850 17:48:26 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.850 17:48:26 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.850 17:48:26 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:31.850 17:48:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.850 17:48:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:31.850 17:48:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:31.850 17:48:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:31.850 17:48:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:31.850 17:48:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.850 17:48:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:31.850 17:48:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.850 17:48:26 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.850 17:48:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:33.810 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:33.810 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.810 17:48:28 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:33.811 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:33.811 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.811 17:48:28 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:25:33.811 00:25:33.811 --- 10.0.0.2 ping statistics --- 00:25:33.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.811 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:25:33.811 00:25:33.811 --- 10.0.0.1 ping statistics --- 00:25:33.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.811 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:33.811 17:48:28 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:34.746 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:34.746 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:34.746 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:34.746 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:34.746 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:34.746 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:34.746 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:34.746 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:34.746 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:34.746 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:34.746 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:34.746 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:34.746 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:34.746 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:34.746 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:34.747 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:34.747 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:35.004 17:48:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:35.004 17:48:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2348080 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:35.004 17:48:29 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2348080 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2348080 ']' 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:35.004 17:48:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:35.004 [2024-07-15 17:48:30.044960] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:25:35.004 [2024-07-15 17:48:30.045058] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.004 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.004 [2024-07-15 17:48:30.108312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.261 [2024-07-15 17:48:30.213249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.261 [2024-07-15 17:48:30.213310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.261 [2024-07-15 17:48:30.213340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.261 [2024-07-15 17:48:30.213351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.261 [2024-07-15 17:48:30.213361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:35.261 [2024-07-15 17:48:30.213386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:35.261 17:48:30 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:35.261 17:48:30 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.261 17:48:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:35.261 17:48:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:35.261 [2024-07-15 17:48:30.352405] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.261 17:48:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.261 17:48:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:35.261 ************************************ 00:25:35.261 START TEST fio_dif_1_default 00:25:35.261 ************************************ 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:35.261 bdev_null0 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:35.261 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:35.518 [2024-07-15 17:48:30.408676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:35.518 { 00:25:35.518 "params": { 00:25:35.518 "name": "Nvme$subsystem", 00:25:35.518 "trtype": "$TEST_TRANSPORT", 00:25:35.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.518 "adrfam": "ipv4", 00:25:35.518 "trsvcid": "$NVMF_PORT", 00:25:35.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.518 "hdgst": ${hdgst:-false}, 00:25:35.518 "ddgst": ${ddgst:-false} 00:25:35.518 }, 00:25:35.518 "method": "bdev_nvme_attach_controller" 00:25:35.518 } 00:25:35.518 EOF 00:25:35.518 )") 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:35.518 "params": { 00:25:35.518 "name": "Nvme0", 00:25:35.518 "trtype": "tcp", 00:25:35.518 "traddr": "10.0.0.2", 00:25:35.518 "adrfam": "ipv4", 00:25:35.518 "trsvcid": "4420", 00:25:35.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:35.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:35.518 "hdgst": false, 00:25:35.518 "ddgst": false 00:25:35.518 }, 00:25:35.518 "method": "bdev_nvme_attach_controller" 00:25:35.518 }' 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:35.518 17:48:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.775 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:35.775 fio-3.35 00:25:35.775 Starting 1 thread 00:25:35.775 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.956 00:25:47.956 filename0: (groupid=0, jobs=1): err= 0: pid=2348305: Mon Jul 15 17:48:41 2024 00:25:47.956 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10028msec) 00:25:47.956 slat (nsec): min=4277, max=51151, avg=9534.09, stdev=3037.99 00:25:47.956 clat (usec): min=40880, max=46645, avg=41236.46, stdev=556.56 00:25:47.956 lat (usec): min=40888, max=46660, avg=41245.99, stdev=556.64 00:25:47.956 clat percentiles (usec): 00:25:47.956 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:47.956 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:47.956 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:25:47.956 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:25:47.956 | 99.99th=[46400] 00:25:47.956 bw ( KiB/s): min= 352, max= 416, per=99.82%, avg=387.20, stdev=14.31, samples=20 00:25:47.956 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:25:47.956 
lat (msec) : 50=100.00% 00:25:47.956 cpu : usr=88.82%, sys=10.85%, ctx=7, majf=0, minf=234 00:25:47.956 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.956 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.956 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:47.956 00:25:47.956 Run status group 0 (all jobs): 00:25:47.956 READ: bw=388KiB/s (397kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10028-10028msec 00:25:47.956 17:48:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:47.956 17:48:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:47.956 17:48:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:47.956 17:48:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 00:25:47.957 real 0m11.040s 00:25:47.957 user 0m9.974s 00:25:47.957 sys 0m1.355s 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 ************************************ 00:25:47.957 END TEST fio_dif_1_default 00:25:47.957 ************************************ 00:25:47.957 17:48:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:47.957 17:48:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:47.957 17:48:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:47.957 17:48:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 ************************************ 00:25:47.957 START TEST fio_dif_1_multi_subsystems 00:25:47.957 ************************************ 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:47.957 17:48:41 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 bdev_null0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 [2024-07-15 17:48:41.503971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 bdev_null1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:47.957 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:47.958 { 00:25:47.958 "params": { 00:25:47.958 "name": "Nvme$subsystem", 00:25:47.958 "trtype": "$TEST_TRANSPORT", 00:25:47.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.958 "adrfam": "ipv4", 00:25:47.958 "trsvcid": "$NVMF_PORT", 00:25:47.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:25:47.958 "hdgst": ${hdgst:-false}, 00:25:47.958 "ddgst": ${ddgst:-false} 00:25:47.958 }, 00:25:47.958 "method": "bdev_nvme_attach_controller" 00:25:47.958 } 00:25:47.958 EOF 00:25:47.958 )") 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:47.958 { 00:25:47.958 "params": { 00:25:47.958 "name": "Nvme$subsystem", 00:25:47.958 "trtype": "$TEST_TRANSPORT", 00:25:47.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.958 "adrfam": "ipv4", 00:25:47.958 "trsvcid": "$NVMF_PORT", 00:25:47.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.958 "hdgst": ${hdgst:-false}, 00:25:47.958 "ddgst": ${ddgst:-false} 00:25:47.958 }, 00:25:47.958 "method": "bdev_nvme_attach_controller" 00:25:47.958 } 00:25:47.958 EOF 00:25:47.958 )") 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:47.958 "params": { 00:25:47.958 "name": "Nvme0", 00:25:47.958 "trtype": "tcp", 00:25:47.958 "traddr": "10.0.0.2", 00:25:47.958 "adrfam": "ipv4", 00:25:47.958 "trsvcid": "4420", 00:25:47.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:47.958 "hdgst": false, 00:25:47.958 "ddgst": false 00:25:47.958 }, 00:25:47.958 "method": "bdev_nvme_attach_controller" 00:25:47.958 },{ 00:25:47.958 "params": { 00:25:47.958 "name": "Nvme1", 00:25:47.958 "trtype": "tcp", 00:25:47.958 "traddr": "10.0.0.2", 00:25:47.958 "adrfam": "ipv4", 00:25:47.958 "trsvcid": "4420", 00:25:47.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.958 "hdgst": false, 00:25:47.958 "ddgst": false 00:25:47.958 }, 00:25:47.958 "method": "bdev_nvme_attach_controller" 00:25:47.958 }' 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:47.958 17:48:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.958 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:47.958 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:47.958 fio-3.35 00:25:47.958 Starting 2 threads 00:25:47.958 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.940 00:25:57.940 filename0: (groupid=0, jobs=1): err= 0: pid=2349724: Mon Jul 15 17:48:52 2024 00:25:57.940 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10026msec) 00:25:57.940 slat (nsec): min=7021, max=65877, avg=9675.91, stdev=4182.87 00:25:57.940 clat (usec): min=818, max=42182, avg=21210.53, stdev=20191.61 00:25:57.940 lat (usec): min=825, max=42207, avg=21220.20, stdev=20191.17 00:25:57.940 clat percentiles (usec): 00:25:57.940 | 1.00th=[ 857], 5.00th=[ 881], 10.00th=[ 889], 20.00th=[ 906], 00:25:57.940 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[41157], 60.00th=[41157], 00:25:57.940 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:25:57.940 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:57.940 | 99.99th=[42206] 
00:25:57.940 bw ( KiB/s): min= 672, max= 768, per=49.98%, avg=753.60, stdev=30.22, samples=20 00:25:57.940 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:25:57.940 lat (usec) : 1000=45.02% 00:25:57.940 lat (msec) : 2=4.77%, 50=50.21% 00:25:57.940 cpu : usr=93.58%, sys=6.12%, ctx=12, majf=0, minf=140 00:25:57.940 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:57.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.940 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.940 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:57.940 filename1: (groupid=0, jobs=1): err= 0: pid=2349725: Mon Jul 15 17:48:52 2024 00:25:57.940 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10004msec) 00:25:57.940 slat (nsec): min=6979, max=36329, avg=9098.77, stdev=3065.36 00:25:57.940 clat (usec): min=815, max=42187, avg=21165.35, stdev=20138.80 00:25:57.940 lat (usec): min=823, max=42224, avg=21174.45, stdev=20138.53 00:25:57.941 clat percentiles (usec): 00:25:57.941 | 1.00th=[ 848], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 922], 00:25:57.941 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:25:57.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:25:57.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:57.941 | 99.99th=[42206] 00:25:57.941 bw ( KiB/s): min= 672, max= 768, per=49.98%, avg=753.60, stdev=26.42, samples=20 00:25:57.941 iops : min= 168, max= 192, avg=188.40, stdev= 6.60, samples=20 00:25:57.941 lat (usec) : 1000=46.98% 00:25:57.941 lat (msec) : 2=2.81%, 50=50.21% 00:25:57.941 cpu : usr=93.99%, sys=5.70%, ctx=22, majf=0, minf=137 00:25:57.941 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:57.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.941 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.941 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:57.941 00:25:57.941 Run status group 0 (all jobs): 00:25:57.941 READ: bw=1506KiB/s (1543kB/s), 753KiB/s-755KiB/s (771kB/s-773kB/s), io=14.8MiB (15.5MB), run=10004-10026msec 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 00:25:57.941 real 0m11.438s 00:25:57.941 user 0m20.198s 00:25:57.941 sys 0m1.466s 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 ************************************ 00:25:57.941 END TEST fio_dif_1_multi_subsystems 00:25:57.941 ************************************ 00:25:57.941 17:48:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:57.941 17:48:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:57.941 17:48:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:57.941 17:48:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 ************************************ 00:25:57.941 START TEST fio_dif_rand_params 00:25:57.941 ************************************ 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.941 17:48:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 bdev_null0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.941 [2024-07-15 17:48:52.991107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:57.941 { 00:25:57.941 "params": { 00:25:57.941 "name": "Nvme$subsystem", 00:25:57.941 "trtype": "$TEST_TRANSPORT", 00:25:57.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.941 "adrfam": "ipv4", 00:25:57.941 "trsvcid": "$NVMF_PORT", 00:25:57.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.941 "hdgst": ${hdgst:-false}, 00:25:57.941 "ddgst": ${ddgst:-false} 00:25:57.941 }, 00:25:57.941 "method": "bdev_nvme_attach_controller" 00:25:57.941 } 00:25:57.941 EOF 00:25:57.941 )") 
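[editor note] The heredoc captured above is how gen_nvmf_target_json accumulates one bdev_nvme_attach_controller entry per subsystem; the jq step that follows joins the entries and fio receives the result on /dev/fd/62 alongside the generated job file. Purely as a sketch of that hand-off (not part of the trace): the standalone config file, the job file, and the "subsystems"/"bdev" envelope spelled out below are assumptions about how the helper wraps the entries.

# Sketch only -- mirrors the --ioengine/--spdk_json_conf/LD_PRELOAD usage seen in this trace.
cat > /tmp/spdk_fio_conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# job.fio is assumed to reference the attached namespace bdev (e.g. filename=Nvme0n1, thread=1).
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/spdk_fio_conf.json job.fio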
00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.941 17:48:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:57.942 "params": { 00:25:57.942 "name": "Nvme0", 00:25:57.942 "trtype": "tcp", 00:25:57.942 "traddr": "10.0.0.2", 00:25:57.942 "adrfam": "ipv4", 00:25:57.942 "trsvcid": "4420", 00:25:57.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:57.942 "hdgst": false, 00:25:57.942 "ddgst": false 00:25:57.942 }, 00:25:57.942 "method": "bdev_nvme_attach_controller" 00:25:57.942 }' 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:57.942 17:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.200 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:58.200 ... 
00:25:58.200 fio-3.35 00:25:58.200 Starting 3 threads 00:25:58.200 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.757 00:26:04.757 filename0: (groupid=0, jobs=1): err= 0: pid=2351124: Mon Jul 15 17:48:58 2024 00:26:04.757 read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(108MiB/5046msec) 00:26:04.757 slat (nsec): min=4294, max=49069, avg=14506.68, stdev=4824.52 00:26:04.757 clat (usec): min=4948, max=91352, avg=17389.18, stdev=15384.73 00:26:04.757 lat (usec): min=4959, max=91366, avg=17403.68, stdev=15384.60 00:26:04.757 clat percentiles (usec): 00:26:04.757 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 8356], 00:26:04.757 | 30.00th=[ 9372], 40.00th=[10683], 50.00th=[11863], 60.00th=[12780], 00:26:04.757 | 70.00th=[13960], 80.00th=[15926], 90.00th=[50594], 95.00th=[53216], 00:26:04.757 | 99.00th=[56361], 99.50th=[56886], 99.90th=[91751], 99.95th=[91751], 00:26:04.757 | 99.99th=[91751] 00:26:04.757 bw ( KiB/s): min=15872, max=25600, per=30.64%, avg=22118.40, stdev=3015.06, samples=10 00:26:04.757 iops : min= 124, max= 200, avg=172.80, stdev=23.56, samples=10 00:26:04.757 lat (msec) : 10=34.37%, 20=49.83%, 50=4.15%, 100=11.65% 00:26:04.757 cpu : usr=94.61%, sys=4.98%, ctx=11, majf=0, minf=77 00:26:04.757 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.757 issued rwts: total=867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.757 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:04.757 filename0: (groupid=0, jobs=1): err= 0: pid=2351125: Mon Jul 15 17:48:58 2024 00:26:04.757 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(122MiB/5006msec) 00:26:04.757 slat (usec): min=4, max=425, avg=16.76, stdev=27.26 00:26:04.757 clat (usec): min=5420, max=91926, avg=15341.33, stdev=15431.95 00:26:04.757 lat (usec): min=5432, max=91956, avg=15358.10, stdev=15431.74 00:26:04.757 clat percentiles (usec): 00:26:04.757 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7439], 00:26:04.757 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10683], 00:26:04.757 | 70.00th=[11731], 80.00th=[13042], 90.00th=[49546], 95.00th=[52167], 00:26:04.757 | 99.00th=[56361], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:26:04.757 | 99.99th=[91751] 00:26:04.757 bw ( KiB/s): min=17152, max=37120, per=34.54%, avg=24934.40, stdev=7046.28, samples=10 00:26:04.757 iops : min= 134, max= 290, avg=194.80, stdev=55.05, samples=10 00:26:04.757 lat (msec) : 10=52.00%, 20=35.01%, 50=3.79%, 100=9.21% 00:26:04.757 cpu : usr=80.42%, sys=9.37%, ctx=162, majf=0, minf=167 00:26:04.757 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.757 issued rwts: total=977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.757 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:04.757 filename0: (groupid=0, jobs=1): err= 0: pid=2351126: Mon Jul 15 17:48:58 2024 00:26:04.757 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(125MiB/5006msec) 00:26:04.757 slat (nsec): min=4490, max=52038, avg=15384.63, stdev=4955.69 00:26:04.757 clat (usec): min=5366, max=92356, avg=14963.47, stdev=14056.12 00:26:04.757 lat (usec): min=5379, max=92370, avg=14978.85, stdev=14056.72 00:26:04.757 clat percentiles (usec): 00:26:04.757 
| 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7832], 00:26:04.757 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10814], 00:26:04.757 | 70.00th=[11731], 80.00th=[13042], 90.00th=[49021], 95.00th=[51119], 00:26:04.757 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[92799], 00:26:04.757 | 99.99th=[92799] 00:26:04.757 bw ( KiB/s): min=20264, max=31488, per=35.43%, avg=25578.40, stdev=3960.78, samples=10 00:26:04.757 iops : min= 158, max= 246, avg=199.80, stdev=30.99, samples=10 00:26:04.757 lat (msec) : 10=52.10%, 20=34.83%, 50=5.59%, 100=7.49% 00:26:04.757 cpu : usr=94.61%, sys=4.64%, ctx=167, majf=0, minf=101 00:26:04.757 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.757 issued rwts: total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.757 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:04.757 00:26:04.757 Run status group 0 (all jobs): 00:26:04.757 READ: bw=70.5MiB/s (73.9MB/s), 21.5MiB/s-25.0MiB/s (22.5MB/s-26.2MB/s), io=356MiB (373MB), run=5006-5046msec 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:04.757 17:48:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 bdev_null0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 [2024-07-15 17:48:59.245849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 bdev_null1 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
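[editor note] The rpc_cmd calls traced here are the target-side setup for subsystem 0 of this test group: a null bdev with 16-byte metadata and DIF type 2, an NVMe-oF subsystem, a namespace mapping, and a TCP listener on 10.0.0.2:4420 (the same sequence then repeats for subsystems 1 and 2). As a sketch, the equivalent commands issued directly against an already running target with SPDK's rpc.py would look as follows; the rpc.py path and the running-target assumption are not part of the trace, while the RPC names and arguments are taken verbatim from it.

# Sketch: same RPC names and arguments as the rpc_cmd calls above, issued via rpc.py.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420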
00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 bdev_null2 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.757 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:26:04.758 { 00:26:04.758 "params": { 00:26:04.758 "name": "Nvme$subsystem", 00:26:04.758 "trtype": "$TEST_TRANSPORT", 00:26:04.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.758 "adrfam": "ipv4", 00:26:04.758 "trsvcid": "$NVMF_PORT", 00:26:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.758 "hdgst": ${hdgst:-false}, 00:26:04.758 "ddgst": ${ddgst:-false} 00:26:04.758 }, 00:26:04.758 "method": "bdev_nvme_attach_controller" 00:26:04.758 } 00:26:04.758 EOF 00:26:04.758 )") 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:04.758 { 00:26:04.758 "params": { 00:26:04.758 "name": "Nvme$subsystem", 00:26:04.758 "trtype": "$TEST_TRANSPORT", 00:26:04.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.758 "adrfam": "ipv4", 00:26:04.758 "trsvcid": "$NVMF_PORT", 00:26:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.758 "hdgst": ${hdgst:-false}, 00:26:04.758 "ddgst": ${ddgst:-false} 00:26:04.758 }, 00:26:04.758 "method": "bdev_nvme_attach_controller" 00:26:04.758 } 00:26:04.758 EOF 00:26:04.758 )") 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:04.758 { 00:26:04.758 "params": { 00:26:04.758 "name": "Nvme$subsystem", 00:26:04.758 "trtype": "$TEST_TRANSPORT", 00:26:04.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.758 "adrfam": "ipv4", 00:26:04.758 "trsvcid": "$NVMF_PORT", 00:26:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.758 "hdgst": ${hdgst:-false}, 00:26:04.758 "ddgst": ${ddgst:-false} 00:26:04.758 }, 00:26:04.758 "method": "bdev_nvme_attach_controller" 00:26:04.758 } 00:26:04.758 EOF 00:26:04.758 )") 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:04.758 "params": { 00:26:04.758 "name": "Nvme0", 00:26:04.758 "trtype": "tcp", 00:26:04.758 "traddr": "10.0.0.2", 00:26:04.758 "adrfam": "ipv4", 00:26:04.758 "trsvcid": "4420", 00:26:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:04.758 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:04.758 "hdgst": false, 00:26:04.758 "ddgst": false 00:26:04.758 }, 00:26:04.758 "method": "bdev_nvme_attach_controller" 00:26:04.758 },{ 00:26:04.758 "params": { 00:26:04.758 "name": "Nvme1", 00:26:04.758 "trtype": "tcp", 00:26:04.758 "traddr": "10.0.0.2", 00:26:04.758 "adrfam": "ipv4", 00:26:04.758 "trsvcid": "4420", 00:26:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:04.758 "hdgst": false, 00:26:04.758 "ddgst": false 00:26:04.758 }, 00:26:04.758 "method": "bdev_nvme_attach_controller" 00:26:04.758 },{ 00:26:04.758 "params": { 00:26:04.758 "name": "Nvme2", 00:26:04.758 "trtype": "tcp", 00:26:04.758 "traddr": "10.0.0.2", 00:26:04.758 "adrfam": "ipv4", 00:26:04.758 "trsvcid": "4420", 00:26:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:04.758 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:04.758 "hdgst": false, 00:26:04.758 "ddgst": false 00:26:04.758 }, 00:26:04.758 "method": "bdev_nvme_attach_controller" 00:26:04.758 }' 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:04.758 17:48:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.758 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:04.758 ... 00:26:04.758 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:04.758 ... 00:26:04.758 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:04.758 ... 00:26:04.758 fio-3.35 00:26:04.758 Starting 24 threads 00:26:04.758 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.002 00:26:17.002 filename0: (groupid=0, jobs=1): err= 0: pid=2351985: Mon Jul 15 17:49:10 2024 00:26:17.002 read: IOPS=71, BW=285KiB/s (292kB/s)(2880KiB/10101msec) 00:26:17.002 slat (nsec): min=8306, max=63476, avg=20400.18, stdev=9488.75 00:26:17.002 clat (msec): min=124, max=413, avg=224.30, stdev=46.75 00:26:17.002 lat (msec): min=124, max=413, avg=224.32, stdev=46.75 00:26:17.002 clat percentiles (msec): 00:26:17.002 | 1.00th=[ 126], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 188], 00:26:17.002 | 30.00th=[ 199], 40.00th=[ 211], 50.00th=[ 224], 60.00th=[ 232], 00:26:17.002 | 70.00th=[ 247], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 305], 00:26:17.002 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 414], 00:26:17.002 | 99.99th=[ 414] 00:26:17.003 bw ( KiB/s): min= 208, max= 384, per=4.26%, avg=281.60, stdev=49.36, samples=20 00:26:17.003 iops : min= 52, max= 96, avg=70.40, stdev=12.34, samples=20 00:26:17.003 lat (msec) : 250=75.28%, 500=24.72% 00:26:17.003 cpu : usr=98.31%, sys=1.34%, ctx=18, majf=0, minf=20 00:26:17.003 IO depths : 1=2.8%, 2=7.5%, 4=20.4%, 8=59.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename0: (groupid=0, jobs=1): err= 0: pid=2351986: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=80, BW=323KiB/s (330kB/s)(3264KiB/10120msec) 00:26:17.003 slat (nsec): min=8169, max=96407, avg=21943.42, stdev=16511.76 00:26:17.003 clat (msec): min=69, max=347, avg=198.05, stdev=44.35 00:26:17.003 lat (msec): min=69, max=347, avg=198.07, stdev=44.36 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 82], 5.00th=[ 121], 10.00th=[ 144], 20.00th=[ 165], 00:26:17.003 | 30.00th=[ 174], 40.00th=[ 192], 50.00th=[ 201], 60.00th=[ 211], 00:26:17.003 | 70.00th=[ 224], 80.00th=[ 236], 90.00th=[ 247], 95.00th=[ 255], 00:26:17.003 | 99.00th=[ 296], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:26:17.003 | 99.99th=[ 347] 00:26:17.003 bw ( KiB/s): min= 256, max= 384, per=4.85%, avg=320.00, stdev=61.20, samples=20 00:26:17.003 iops : min= 64, max= 96, avg=80.00, stdev=15.30, samples=20 00:26:17.003 lat (msec) : 100=3.68%, 250=88.73%, 500=7.60% 00:26:17.003 cpu : usr=97.78%, sys=1.62%, ctx=53, majf=0, minf=17 00:26:17.003 IO 
depths : 1=3.8%, 2=9.8%, 4=24.3%, 8=53.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename0: (groupid=0, jobs=1): err= 0: pid=2351987: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10088msec) 00:26:17.003 slat (nsec): min=6381, max=71242, avg=17690.55, stdev=9154.70 00:26:17.003 clat (msec): min=111, max=445, avg=258.50, stdev=53.49 00:26:17.003 lat (msec): min=111, max=445, avg=258.51, stdev=53.48 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 142], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 226], 00:26:17.003 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 275], 00:26:17.003 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 321], 95.00th=[ 355], 00:26:17.003 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 447], 99.95th=[ 447], 00:26:17.003 | 99.99th=[ 447] 00:26:17.003 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=243.20, stdev=68.00, samples=20 00:26:17.003 iops : min= 32, max= 96, avg=60.80, stdev=17.00, samples=20 00:26:17.003 lat (msec) : 250=44.87%, 500=55.13% 00:26:17.003 cpu : usr=97.23%, sys=1.86%, ctx=51, majf=0, minf=23 00:26:17.003 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename0: (groupid=0, jobs=1): err= 0: pid=2351988: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10091msec) 00:26:17.003 slat (usec): min=8, max=157, avg=46.75, stdev=26.71 00:26:17.003 clat (msec): min=110, max=390, avg=239.87, stdev=42.31 00:26:17.003 lat (msec): min=110, max=390, avg=239.92, stdev=42.31 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 180], 20.00th=[ 205], 00:26:17.003 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 249], 00:26:17.003 | 70.00th=[ 262], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 300], 00:26:17.003 | 99.00th=[ 326], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 393], 00:26:17.003 | 99.99th=[ 393] 00:26:17.003 bw ( KiB/s): min= 144, max= 384, per=3.97%, avg=262.40, stdev=46.55, samples=20 00:26:17.003 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:26:17.003 lat (msec) : 250=63.54%, 500=36.46% 00:26:17.003 cpu : usr=96.41%, sys=2.16%, ctx=156, majf=0, minf=15 00:26:17.003 IO depths : 1=3.4%, 2=9.5%, 4=24.6%, 8=53.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename0: (groupid=0, jobs=1): err= 0: pid=2351989: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10088msec) 00:26:17.003 slat (usec): min=8, max=194, avg=48.62, stdev=25.05 00:26:17.003 clat (msec): min=161, max=383, avg=258.30, stdev=50.36 00:26:17.003 
lat (msec): min=161, max=383, avg=258.34, stdev=50.35 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 163], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 222], 00:26:17.003 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 284], 00:26:17.003 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 313], 95.00th=[ 342], 00:26:17.003 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:26:17.003 | 99.99th=[ 384] 00:26:17.003 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=243.20, stdev=64.96, samples=20 00:26:17.003 iops : min= 32, max= 96, avg=60.80, stdev=16.24, samples=20 00:26:17.003 lat (msec) : 250=49.20%, 500=50.80% 00:26:17.003 cpu : usr=97.64%, sys=1.65%, ctx=42, majf=0, minf=18 00:26:17.003 IO depths : 1=1.3%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename0: (groupid=0, jobs=1): err= 0: pid=2351990: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=85, BW=342KiB/s (350kB/s)(3456KiB/10117msec) 00:26:17.003 slat (usec): min=4, max=157, avg=21.85, stdev=20.80 00:26:17.003 clat (msec): min=59, max=382, avg=187.00, stdev=47.35 00:26:17.003 lat (msec): min=59, max=382, avg=187.03, stdev=47.35 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 60], 5.00th=[ 129], 10.00th=[ 138], 20.00th=[ 150], 00:26:17.003 | 30.00th=[ 167], 40.00th=[ 180], 50.00th=[ 192], 60.00th=[ 201], 00:26:17.003 | 70.00th=[ 209], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 243], 00:26:17.003 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:26:17.003 | 99.99th=[ 384] 00:26:17.003 bw ( KiB/s): min= 256, max= 512, per=5.14%, avg=339.20, stdev=71.29, samples=20 00:26:17.003 iops : min= 64, max= 128, avg=84.80, stdev=17.82, samples=20 00:26:17.003 lat (msec) : 100=3.70%, 250=92.48%, 500=3.82% 00:26:17.003 cpu : usr=97.73%, sys=1.71%, ctx=34, majf=0, minf=30 00:26:17.003 IO depths : 1=1.5%, 2=7.6%, 4=24.7%, 8=55.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename0: (groupid=0, jobs=1): err= 0: pid=2351991: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=61, BW=247KiB/s (253kB/s)(2488KiB/10086msec) 00:26:17.003 slat (usec): min=8, max=144, avg=24.76, stdev= 8.96 00:26:17.003 clat (msec): min=100, max=444, avg=259.18, stdev=58.53 00:26:17.003 lat (msec): min=100, max=444, avg=259.20, stdev=58.53 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 102], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 226], 00:26:17.003 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 262], 60.00th=[ 275], 00:26:17.003 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 321], 95.00th=[ 355], 00:26:17.003 | 99.00th=[ 426], 99.50th=[ 443], 99.90th=[ 447], 99.95th=[ 447], 00:26:17.003 | 99.99th=[ 447] 00:26:17.003 bw ( KiB/s): min= 128, max= 384, per=3.67%, avg=242.40, stdev=69.31, samples=20 00:26:17.003 iops : min= 32, max= 96, avg=60.60, stdev=17.33, samples=20 00:26:17.003 lat (msec) : 250=44.69%, 500=55.31% 00:26:17.003 cpu : usr=98.13%, sys=1.37%, 
ctx=27, majf=0, minf=27 00:26:17.003 IO depths : 1=3.4%, 2=9.5%, 4=24.6%, 8=53.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename0: (groupid=0, jobs=1): err= 0: pid=2351992: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10101msec) 00:26:17.003 slat (usec): min=8, max=100, avg=34.07, stdev=21.73 00:26:17.003 clat (msec): min=88, max=435, avg=240.22, stdev=52.14 00:26:17.003 lat (msec): min=88, max=435, avg=240.26, stdev=52.15 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 136], 5.00th=[ 155], 10.00th=[ 167], 20.00th=[ 194], 00:26:17.003 | 30.00th=[ 226], 40.00th=[ 232], 50.00th=[ 241], 60.00th=[ 251], 00:26:17.003 | 70.00th=[ 262], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 317], 00:26:17.003 | 99.00th=[ 359], 99.50th=[ 397], 99.90th=[ 435], 99.95th=[ 435], 00:26:17.003 | 99.99th=[ 435] 00:26:17.003 bw ( KiB/s): min= 128, max= 384, per=3.97%, avg=262.40, stdev=50.44, samples=20 00:26:17.003 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:26:17.003 lat (msec) : 100=0.60%, 250=58.93%, 500=40.48% 00:26:17.003 cpu : usr=97.93%, sys=1.41%, ctx=54, majf=0, minf=25 00:26:17.003 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:17.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.003 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.003 filename1: (groupid=0, jobs=1): err= 0: pid=2351993: Mon Jul 15 17:49:10 2024 00:26:17.003 read: IOPS=63, BW=253KiB/s (260kB/s)(2560KiB/10101msec) 00:26:17.003 slat (usec): min=9, max=173, avg=62.91, stdev=25.03 00:26:17.003 clat (msec): min=161, max=364, avg=251.96, stdev=48.47 00:26:17.003 lat (msec): min=161, max=364, avg=252.03, stdev=48.48 00:26:17.003 clat percentiles (msec): 00:26:17.003 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 203], 00:26:17.003 | 30.00th=[ 232], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 262], 00:26:17.003 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 321], 00:26:17.003 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:26:17.003 | 99.99th=[ 363] 00:26:17.004 bw ( KiB/s): min= 128, max= 384, per=3.78%, avg=249.60, stdev=65.33, samples=20 00:26:17.004 iops : min= 32, max= 96, avg=62.40, stdev=16.33, samples=20 00:26:17.004 lat (msec) : 250=47.50%, 500=52.50% 00:26:17.004 cpu : usr=96.72%, sys=1.92%, ctx=53, majf=0, minf=19 00:26:17.004 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename1: (groupid=0, jobs=1): err= 0: pid=2351994: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10087msec) 00:26:17.004 slat (nsec): min=6683, max=56769, avg=22592.27, stdev=10108.98 00:26:17.004 clat (msec): 
min=125, max=435, avg=239.93, stdev=62.14 00:26:17.004 lat (msec): min=125, max=435, avg=239.96, stdev=62.14 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 142], 5.00th=[ 144], 10.00th=[ 161], 20.00th=[ 174], 00:26:17.004 | 30.00th=[ 207], 40.00th=[ 226], 50.00th=[ 247], 60.00th=[ 251], 00:26:17.004 | 70.00th=[ 266], 80.00th=[ 284], 90.00th=[ 326], 95.00th=[ 363], 00:26:17.004 | 99.00th=[ 384], 99.50th=[ 430], 99.90th=[ 435], 99.95th=[ 435], 00:26:17.004 | 99.99th=[ 435] 00:26:17.004 bw ( KiB/s): min= 128, max= 384, per=3.97%, avg=262.40, stdev=56.72, samples=20 00:26:17.004 iops : min= 32, max= 96, avg=65.60, stdev=14.18, samples=20 00:26:17.004 lat (msec) : 250=59.23%, 500=40.77% 00:26:17.004 cpu : usr=97.16%, sys=1.84%, ctx=85, majf=0, minf=25 00:26:17.004 IO depths : 1=5.5%, 2=11.6%, 4=24.6%, 8=51.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename1: (groupid=0, jobs=1): err= 0: pid=2351995: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10096msec) 00:26:17.004 slat (usec): min=5, max=137, avg=29.37, stdev=14.60 00:26:17.004 clat (msec): min=119, max=392, avg=240.12, stdev=47.34 00:26:17.004 lat (msec): min=119, max=392, avg=240.15, stdev=47.34 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 142], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 194], 00:26:17.004 | 30.00th=[ 222], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 251], 00:26:17.004 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 313], 00:26:17.004 | 99.00th=[ 326], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 393], 00:26:17.004 | 99.99th=[ 393] 00:26:17.004 bw ( KiB/s): min= 128, max= 384, per=3.97%, avg=262.35, stdev=50.71, samples=20 00:26:17.004 iops : min= 32, max= 96, avg=65.55, stdev=12.68, samples=20 00:26:17.004 lat (msec) : 250=60.27%, 500=39.73% 00:26:17.004 cpu : usr=98.06%, sys=1.40%, ctx=20, majf=0, minf=29 00:26:17.004 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename1: (groupid=0, jobs=1): err= 0: pid=2351996: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=68, BW=272KiB/s (279kB/s)(2752KiB/10102msec) 00:26:17.004 slat (nsec): min=8417, max=99174, avg=26476.61, stdev=16639.10 00:26:17.004 clat (msec): min=125, max=415, avg=234.70, stdev=47.78 00:26:17.004 lat (msec): min=125, max=415, avg=234.73, stdev=47.78 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 150], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 190], 00:26:17.004 | 30.00th=[ 213], 40.00th=[ 226], 50.00th=[ 239], 60.00th=[ 247], 00:26:17.004 | 70.00th=[ 259], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 313], 00:26:17.004 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 414], 99.95th=[ 414], 00:26:17.004 | 99.99th=[ 414] 00:26:17.004 bw ( KiB/s): min= 128, max= 384, per=4.06%, avg=268.80, stdev=61.33, samples=20 00:26:17.004 iops : min= 32, max= 96, avg=67.20, stdev=15.33, samples=20 00:26:17.004 lat (msec) : 
250=64.83%, 500=35.17% 00:26:17.004 cpu : usr=98.11%, sys=1.40%, ctx=35, majf=0, minf=25 00:26:17.004 IO depths : 1=3.3%, 2=9.3%, 4=24.1%, 8=54.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename1: (groupid=0, jobs=1): err= 0: pid=2351997: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=69, BW=279KiB/s (285kB/s)(2816KiB/10105msec) 00:26:17.004 slat (usec): min=6, max=224, avg=25.08, stdev=17.92 00:26:17.004 clat (msec): min=75, max=378, avg=229.43, stdev=55.81 00:26:17.004 lat (msec): min=75, max=378, avg=229.46, stdev=55.81 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 77], 5.00th=[ 136], 10.00th=[ 163], 20.00th=[ 192], 00:26:17.004 | 30.00th=[ 209], 40.00th=[ 224], 50.00th=[ 230], 60.00th=[ 247], 00:26:17.004 | 70.00th=[ 255], 80.00th=[ 275], 90.00th=[ 300], 95.00th=[ 313], 00:26:17.004 | 99.00th=[ 330], 99.50th=[ 376], 99.90th=[ 380], 99.95th=[ 380], 00:26:17.004 | 99.99th=[ 380] 00:26:17.004 bw ( KiB/s): min= 128, max= 512, per=4.17%, avg=275.20, stdev=75.33, samples=20 00:26:17.004 iops : min= 32, max= 128, avg=68.80, stdev=18.83, samples=20 00:26:17.004 lat (msec) : 100=4.55%, 250=61.93%, 500=33.52% 00:26:17.004 cpu : usr=96.90%, sys=2.04%, ctx=33, majf=0, minf=29 00:26:17.004 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename1: (groupid=0, jobs=1): err= 0: pid=2351998: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10085msec) 00:26:17.004 slat (usec): min=8, max=134, avg=21.00, stdev=16.69 00:26:17.004 clat (msec): min=134, max=451, avg=258.37, stdev=53.73 00:26:17.004 lat (msec): min=134, max=451, avg=258.39, stdev=53.72 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 222], 00:26:17.004 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 284], 00:26:17.004 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 313], 95.00th=[ 342], 00:26:17.004 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 451], 99.95th=[ 451], 00:26:17.004 | 99.99th=[ 451] 00:26:17.004 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=243.20, stdev=69.37, samples=20 00:26:17.004 iops : min= 32, max= 96, avg=60.80, stdev=17.34, samples=20 00:26:17.004 lat (msec) : 250=49.68%, 500=50.32% 00:26:17.004 cpu : usr=97.50%, sys=1.67%, ctx=52, majf=0, minf=18 00:26:17.004 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename1: (groupid=0, jobs=1): err= 0: pid=2351999: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10101msec) 00:26:17.004 slat (nsec): min=8947, 
max=98364, avg=40776.69, stdev=22931.61 00:26:17.004 clat (msec): min=135, max=347, avg=240.21, stdev=38.73 00:26:17.004 lat (msec): min=135, max=347, avg=240.25, stdev=38.72 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 180], 20.00th=[ 213], 00:26:17.004 | 30.00th=[ 224], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:26:17.004 | 70.00th=[ 262], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 296], 00:26:17.004 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 347], 00:26:17.004 | 99.99th=[ 347] 00:26:17.004 bw ( KiB/s): min= 144, max= 384, per=3.97%, avg=262.40, stdev=46.55, samples=20 00:26:17.004 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:26:17.004 lat (msec) : 250=62.35%, 500=37.65% 00:26:17.004 cpu : usr=98.33%, sys=1.23%, ctx=90, majf=0, minf=26 00:26:17.004 IO depths : 1=1.9%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename1: (groupid=0, jobs=1): err= 0: pid=2352000: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=80, BW=320KiB/s (328kB/s)(3240KiB/10120msec) 00:26:17.004 slat (usec): min=8, max=324, avg=31.27, stdev=29.54 00:26:17.004 clat (msec): min=81, max=369, avg=198.83, stdev=50.01 00:26:17.004 lat (msec): min=81, max=369, avg=198.86, stdev=50.01 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 82], 5.00th=[ 111], 10.00th=[ 136], 20.00th=[ 163], 00:26:17.004 | 30.00th=[ 176], 40.00th=[ 192], 50.00th=[ 201], 60.00th=[ 213], 00:26:17.004 | 70.00th=[ 224], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 262], 00:26:17.004 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 372], 99.95th=[ 372], 00:26:17.004 | 99.99th=[ 372] 00:26:17.004 bw ( KiB/s): min= 176, max= 496, per=4.81%, avg=317.60, stdev=71.98, samples=20 00:26:17.004 iops : min= 44, max= 124, avg=79.40, stdev=18.00, samples=20 00:26:17.004 lat (msec) : 100=3.95%, 250=88.52%, 500=7.53% 00:26:17.004 cpu : usr=96.66%, sys=2.14%, ctx=96, majf=0, minf=43 00:26:17.004 IO depths : 1=1.7%, 2=4.9%, 4=15.6%, 8=66.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:17.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.004 issued rwts: total=810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.004 filename2: (groupid=0, jobs=1): err= 0: pid=2352001: Mon Jul 15 17:49:10 2024 00:26:17.004 read: IOPS=75, BW=303KiB/s (310kB/s)(3064KiB/10119msec) 00:26:17.004 slat (usec): min=4, max=270, avg=39.81, stdev=30.76 00:26:17.004 clat (msec): min=59, max=296, avg=210.89, stdev=47.37 00:26:17.004 lat (msec): min=59, max=296, avg=210.93, stdev=47.38 00:26:17.004 clat percentiles (msec): 00:26:17.004 | 1.00th=[ 61], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 169], 00:26:17.004 | 30.00th=[ 197], 40.00th=[ 211], 50.00th=[ 224], 60.00th=[ 228], 00:26:17.004 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 271], 00:26:17.004 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 296], 00:26:17.004 | 99.99th=[ 296] 00:26:17.004 bw ( KiB/s): min= 144, max= 496, per=4.55%, avg=300.00, stdev=79.39, samples=20 00:26:17.004 iops : min= 36, max= 124, 
avg=75.00, stdev=19.85, samples=20 00:26:17.004 lat (msec) : 100=3.92%, 250=81.46%, 500=14.62% 00:26:17.004 cpu : usr=97.57%, sys=1.64%, ctx=44, majf=0, minf=26 00:26:17.004 IO depths : 1=0.9%, 2=7.2%, 4=25.1%, 8=55.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 filename2: (groupid=0, jobs=1): err= 0: pid=2352002: Mon Jul 15 17:49:10 2024 00:26:17.005 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10086msec) 00:26:17.005 slat (usec): min=8, max=103, avg=29.72, stdev=17.20 00:26:17.005 clat (msec): min=140, max=445, avg=258.36, stdev=55.18 00:26:17.005 lat (msec): min=140, max=445, avg=258.38, stdev=55.17 00:26:17.005 clat percentiles (msec): 00:26:17.005 | 1.00th=[ 140], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 226], 00:26:17.005 | 30.00th=[ 236], 40.00th=[ 247], 50.00th=[ 257], 60.00th=[ 275], 00:26:17.005 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 321], 95.00th=[ 355], 00:26:17.005 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 447], 00:26:17.005 | 99.99th=[ 447] 00:26:17.005 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=243.20, stdev=66.80, samples=20 00:26:17.005 iops : min= 32, max= 96, avg=60.80, stdev=16.70, samples=20 00:26:17.005 lat (msec) : 250=46.15%, 500=53.85% 00:26:17.005 cpu : usr=97.73%, sys=1.41%, ctx=56, majf=0, minf=25 00:26:17.005 IO depths : 1=1.4%, 2=7.5%, 4=24.5%, 8=55.4%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 filename2: (groupid=0, jobs=1): err= 0: pid=2352003: Mon Jul 15 17:49:10 2024 00:26:17.005 read: IOPS=68, BW=272KiB/s (279kB/s)(2752KiB/10101msec) 00:26:17.005 slat (usec): min=8, max=112, avg=37.65, stdev=23.46 00:26:17.005 clat (msec): min=134, max=322, avg=234.60, stdev=40.49 00:26:17.005 lat (msec): min=134, max=322, avg=234.63, stdev=40.49 00:26:17.005 clat percentiles (msec): 00:26:17.005 | 1.00th=[ 136], 5.00th=[ 167], 10.00th=[ 171], 20.00th=[ 197], 00:26:17.005 | 30.00th=[ 226], 40.00th=[ 230], 50.00th=[ 241], 60.00th=[ 247], 00:26:17.005 | 70.00th=[ 257], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 292], 00:26:17.005 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:26:17.005 | 99.99th=[ 321] 00:26:17.005 bw ( KiB/s): min= 128, max= 384, per=4.06%, avg=268.80, stdev=57.24, samples=20 00:26:17.005 iops : min= 32, max= 96, avg=67.20, stdev=14.31, samples=20 00:26:17.005 lat (msec) : 250=64.83%, 500=35.17% 00:26:17.005 cpu : usr=97.85%, sys=1.42%, ctx=37, majf=0, minf=18 00:26:17.005 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 filename2: (groupid=0, jobs=1): err= 0: pid=2352004: Mon Jul 15 17:49:10 2024 00:26:17.005 read: IOPS=64, BW=260KiB/s 
(266kB/s)(2624KiB/10102msec) 00:26:17.005 slat (usec): min=8, max=215, avg=45.63, stdev=28.59 00:26:17.005 clat (msec): min=88, max=425, avg=246.05, stdev=52.59 00:26:17.005 lat (msec): min=88, max=425, avg=246.09, stdev=52.60 00:26:17.005 clat percentiles (msec): 00:26:17.005 | 1.00th=[ 140], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 194], 00:26:17.005 | 30.00th=[ 230], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 262], 00:26:17.005 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 305], 95.00th=[ 313], 00:26:17.005 | 99.00th=[ 363], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:26:17.005 | 99.99th=[ 426] 00:26:17.005 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=256.00, stdev=57.10, samples=20 00:26:17.005 iops : min= 32, max= 96, avg=64.00, stdev=14.28, samples=20 00:26:17.005 lat (msec) : 100=0.61%, 250=53.35%, 500=46.04% 00:26:17.005 cpu : usr=96.68%, sys=2.07%, ctx=193, majf=0, minf=22 00:26:17.005 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 filename2: (groupid=0, jobs=1): err= 0: pid=2352005: Mon Jul 15 17:49:10 2024 00:26:17.005 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10116msec) 00:26:17.005 slat (usec): min=8, max=174, avg=60.85, stdev=25.71 00:26:17.005 clat (msec): min=94, max=455, avg=240.89, stdev=71.62 00:26:17.005 lat (msec): min=94, max=455, avg=240.95, stdev=71.63 00:26:17.005 clat percentiles (msec): 00:26:17.005 | 1.00th=[ 97], 5.00th=[ 106], 10.00th=[ 142], 20.00th=[ 165], 00:26:17.005 | 30.00th=[ 194], 40.00th=[ 230], 50.00th=[ 249], 60.00th=[ 262], 00:26:17.005 | 70.00th=[ 275], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 351], 00:26:17.005 | 99.00th=[ 435], 99.50th=[ 443], 99.90th=[ 456], 99.95th=[ 456], 00:26:17.005 | 99.99th=[ 456] 00:26:17.005 bw ( KiB/s): min= 128, max= 512, per=3.96%, avg=261.60, stdev=84.71, samples=20 00:26:17.005 iops : min= 32, max= 128, avg=65.40, stdev=21.18, samples=20 00:26:17.005 lat (msec) : 100=2.69%, 250=52.99%, 500=44.33% 00:26:17.005 cpu : usr=97.27%, sys=1.68%, ctx=79, majf=0, minf=18 00:26:17.005 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 filename2: (groupid=0, jobs=1): err= 0: pid=2352006: Mon Jul 15 17:49:10 2024 00:26:17.005 read: IOPS=61, BW=248KiB/s (254kB/s)(2496KiB/10076msec) 00:26:17.005 slat (usec): min=8, max=130, avg=24.43, stdev=12.03 00:26:17.005 clat (msec): min=129, max=460, avg=258.15, stdev=61.89 00:26:17.005 lat (msec): min=129, max=460, avg=258.17, stdev=61.88 00:26:17.005 clat percentiles (msec): 00:26:17.005 | 1.00th=[ 146], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 194], 00:26:17.005 | 30.00th=[ 230], 40.00th=[ 247], 50.00th=[ 262], 60.00th=[ 275], 00:26:17.005 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 355], 00:26:17.005 | 99.00th=[ 439], 99.50th=[ 456], 99.90th=[ 460], 99.95th=[ 460], 00:26:17.005 | 99.99th=[ 460] 00:26:17.005 bw ( KiB/s): min= 128, max= 368, per=3.68%, avg=243.20, 
stdev=69.18, samples=20 00:26:17.005 iops : min= 32, max= 92, avg=60.80, stdev=17.29, samples=20 00:26:17.005 lat (msec) : 250=45.67%, 500=54.33% 00:26:17.005 cpu : usr=97.95%, sys=1.54%, ctx=24, majf=0, minf=23 00:26:17.005 IO depths : 1=3.0%, 2=9.1%, 4=24.5%, 8=53.8%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 filename2: (groupid=0, jobs=1): err= 0: pid=2352007: Mon Jul 15 17:49:10 2024 00:26:17.005 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10097msec) 00:26:17.005 slat (usec): min=8, max=252, avg=29.70, stdev=19.83 00:26:17.005 clat (msec): min=112, max=390, avg=240.14, stdev=49.66 00:26:17.005 lat (msec): min=112, max=390, avg=240.17, stdev=49.66 00:26:17.005 clat percentiles (msec): 00:26:17.005 | 1.00th=[ 123], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 194], 00:26:17.005 | 30.00th=[ 222], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 249], 00:26:17.005 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 313], 00:26:17.005 | 99.00th=[ 351], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 393], 00:26:17.005 | 99.99th=[ 393] 00:26:17.005 bw ( KiB/s): min= 144, max= 384, per=3.97%, avg=262.40, stdev=46.55, samples=20 00:26:17.005 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:26:17.005 lat (msec) : 250=61.76%, 500=38.24% 00:26:17.005 cpu : usr=97.63%, sys=1.61%, ctx=40, majf=0, minf=19 00:26:17.005 IO depths : 1=2.2%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 filename2: (groupid=0, jobs=1): err= 0: pid=2352008: Mon Jul 15 17:49:10 2024 00:26:17.005 read: IOPS=87, BW=352KiB/s (360kB/s)(3560KiB/10120msec) 00:26:17.005 slat (nsec): min=7033, max=82943, avg=17826.00, stdev=16340.91 00:26:17.005 clat (msec): min=59, max=313, avg=180.94, stdev=40.90 00:26:17.005 lat (msec): min=59, max=313, avg=180.96, stdev=40.90 00:26:17.005 clat percentiles (msec): 00:26:17.005 | 1.00th=[ 61], 5.00th=[ 113], 10.00th=[ 146], 20.00th=[ 155], 00:26:17.005 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 186], 00:26:17.005 | 70.00th=[ 197], 80.00th=[ 207], 90.00th=[ 228], 95.00th=[ 247], 00:26:17.005 | 99.00th=[ 296], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:26:17.005 | 99.99th=[ 313] 00:26:17.005 bw ( KiB/s): min= 256, max= 512, per=5.29%, avg=349.60, stdev=73.65, samples=20 00:26:17.005 iops : min= 64, max= 128, avg=87.40, stdev=18.41, samples=20 00:26:17.005 lat (msec) : 100=3.60%, 250=91.69%, 500=4.72% 00:26:17.005 cpu : usr=97.80%, sys=1.61%, ctx=37, majf=0, minf=26 00:26:17.005 IO depths : 1=0.9%, 2=2.6%, 4=11.0%, 8=73.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:17.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 complete : 0=0.0%, 4=90.1%, 8=4.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.005 issued rwts: total=890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:17.005 00:26:17.005 Run status group 0 (all jobs): 00:26:17.005 
READ: bw=6595KiB/s (6754kB/s), 247KiB/s-352KiB/s (253kB/s-360kB/s), io=65.2MiB (68.3MB), run=10076-10120msec 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.005 17:49:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.005 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.005 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.005 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 bdev_null0 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 [2024-07-15 17:49:11.068198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
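For reference, the create_subsystem/destroy_subsystem helpers being traced above reduce to a handful of rpc.py calls against the running nvmf_tgt. The sketch below is illustrative only (it assumes a TCP transport has already been created and reuses the addresses and sizes shown in this run); it is not part of the captured output.

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # expose it over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # teardown mirrors the destroy_subsystems sequence above
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0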
00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 bdev_null1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.006 { 00:26:17.006 "params": { 00:26:17.006 "name": "Nvme$subsystem", 00:26:17.006 "trtype": "$TEST_TRANSPORT", 00:26:17.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.006 "adrfam": "ipv4", 00:26:17.006 "trsvcid": "$NVMF_PORT", 00:26:17.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.006 "hdgst": ${hdgst:-false}, 00:26:17.006 "ddgst": ${ddgst:-false} 00:26:17.006 }, 00:26:17.006 "method": "bdev_nvme_attach_controller" 00:26:17.006 } 00:26:17.006 EOF 00:26:17.006 )") 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.006 { 00:26:17.006 "params": { 00:26:17.006 "name": "Nvme$subsystem", 00:26:17.006 "trtype": "$TEST_TRANSPORT", 00:26:17.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.006 "adrfam": "ipv4", 00:26:17.006 "trsvcid": "$NVMF_PORT", 00:26:17.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.006 "hdgst": ${hdgst:-false}, 00:26:17.006 "ddgst": ${ddgst:-false} 00:26:17.006 }, 00:26:17.006 "method": "bdev_nvme_attach_controller" 00:26:17.006 } 00:26:17.006 EOF 00:26:17.006 )") 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:17.006 17:49:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.006 "params": { 00:26:17.006 "name": "Nvme0", 00:26:17.006 "trtype": "tcp", 00:26:17.006 "traddr": "10.0.0.2", 00:26:17.006 "adrfam": "ipv4", 00:26:17.006 "trsvcid": "4420", 00:26:17.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.006 "hdgst": false, 00:26:17.006 "ddgst": false 00:26:17.006 }, 00:26:17.006 "method": "bdev_nvme_attach_controller" 00:26:17.006 },{ 00:26:17.006 "params": { 00:26:17.006 "name": "Nvme1", 00:26:17.006 "trtype": "tcp", 00:26:17.007 "traddr": "10.0.0.2", 00:26:17.007 "adrfam": "ipv4", 00:26:17.007 "trsvcid": "4420", 00:26:17.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.007 "hdgst": false, 00:26:17.007 "ddgst": false 00:26:17.007 }, 00:26:17.007 "method": "bdev_nvme_attach_controller" 00:26:17.007 }' 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:17.007 17:49:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.007 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:17.007 ... 00:26:17.007 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:17.007 ... 
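The JSON printed by printf above is the controller-attach configuration that gen_nvmf_target_json hands to the fio bdev plugin over /dev/fd/62, while the job file arrives on /dev/fd/61. Run outside the harness, the same invocation looks roughly like the sketch below; bdev.json and dif.fio are placeholder file names standing in for those file descriptors, and the plugin path is the one built in this workspace.

  # drive fio through SPDK's bdev layer instead of the kernel block stack
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio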
00:26:17.007 fio-3.35 00:26:17.007 Starting 4 threads 00:26:17.007 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.268 00:26:22.268 filename0: (groupid=0, jobs=1): err= 0: pid=2353394: Mon Jul 15 17:49:17 2024 00:26:22.268 read: IOPS=1874, BW=14.6MiB/s (15.4MB/s)(73.2MiB/5001msec) 00:26:22.268 slat (nsec): min=5193, max=51754, avg=11672.88, stdev=5079.03 00:26:22.268 clat (usec): min=772, max=7509, avg=4233.85, stdev=701.38 00:26:22.268 lat (usec): min=788, max=7522, avg=4245.52, stdev=700.77 00:26:22.268 clat percentiles (usec): 00:26:22.268 | 1.00th=[ 2540], 5.00th=[ 3523], 10.00th=[ 3687], 20.00th=[ 3851], 00:26:22.268 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4146], 00:26:22.268 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 5276], 95.00th=[ 5932], 00:26:22.268 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7439], 00:26:22.268 | 99.99th=[ 7504] 00:26:22.268 bw ( KiB/s): min=13744, max=16096, per=25.25%, avg=15004.44, stdev=691.10, samples=9 00:26:22.268 iops : min= 1718, max= 2012, avg=1875.56, stdev=86.39, samples=9 00:26:22.268 lat (usec) : 1000=0.04% 00:26:22.268 lat (msec) : 2=0.05%, 4=42.15%, 10=57.75% 00:26:22.268 cpu : usr=95.22%, sys=4.34%, ctx=8, majf=0, minf=0 00:26:22.268 IO depths : 1=0.1%, 2=1.8%, 4=69.2%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 issued rwts: total=9373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.268 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:22.268 filename0: (groupid=0, jobs=1): err= 0: pid=2353395: Mon Jul 15 17:49:17 2024 00:26:22.268 read: IOPS=1887, BW=14.7MiB/s (15.5MB/s)(73.8MiB/5003msec) 00:26:22.268 slat (nsec): min=5526, max=47546, avg=11453.73, stdev=4734.65 00:26:22.268 clat (usec): min=801, max=7041, avg=4203.08, stdev=706.09 00:26:22.268 lat (usec): min=817, max=7054, avg=4214.53, stdev=705.59 00:26:22.268 clat percentiles (usec): 00:26:22.268 | 1.00th=[ 2474], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3752], 00:26:22.268 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:26:22.268 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5145], 95.00th=[ 5866], 00:26:22.268 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6915], 99.95th=[ 7046], 00:26:22.268 | 99.99th=[ 7046] 00:26:22.268 bw ( KiB/s): min=14448, max=15792, per=25.40%, avg=15095.11, stdev=481.01, samples=9 00:26:22.268 iops : min= 1806, max= 1974, avg=1886.89, stdev=60.13, samples=9 00:26:22.268 lat (usec) : 1000=0.01% 00:26:22.268 lat (msec) : 2=0.03%, 4=42.39%, 10=57.56% 00:26:22.268 cpu : usr=95.50%, sys=4.04%, ctx=7, majf=0, minf=10 00:26:22.268 IO depths : 1=0.1%, 2=2.5%, 4=68.0%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 issued rwts: total=9445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.268 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:22.268 filename1: (groupid=0, jobs=1): err= 0: pid=2353396: Mon Jul 15 17:49:17 2024 00:26:22.268 read: IOPS=1840, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5002msec) 00:26:22.268 slat (nsec): min=5620, max=47287, avg=11203.39, stdev=4748.24 00:26:22.268 clat (usec): min=2019, max=46251, avg=4312.80, stdev=1434.90 00:26:22.268 lat (usec): min=2041, max=46267, avg=4324.00, stdev=1434.64 00:26:22.268 
clat percentiles (usec): 00:26:22.268 | 1.00th=[ 3064], 5.00th=[ 3490], 10.00th=[ 3654], 20.00th=[ 3818], 00:26:22.268 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:26:22.268 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 5669], 95.00th=[ 5997], 00:26:22.268 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[46400], 00:26:22.268 | 99.99th=[46400] 00:26:22.268 bw ( KiB/s): min=13691, max=15104, per=24.70%, avg=14682.11, stdev=425.23, samples=9 00:26:22.268 iops : min= 1711, max= 1888, avg=1835.22, stdev=53.26, samples=9 00:26:22.268 lat (msec) : 4=40.20%, 10=59.71%, 50=0.09% 00:26:22.268 cpu : usr=94.90%, sys=4.66%, ctx=10, majf=0, minf=0 00:26:22.268 IO depths : 1=0.1%, 2=1.8%, 4=69.9%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 issued rwts: total=9206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.268 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:22.268 filename1: (groupid=0, jobs=1): err= 0: pid=2353397: Mon Jul 15 17:49:17 2024 00:26:22.268 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5001msec) 00:26:22.268 slat (nsec): min=5131, max=63452, avg=13505.04, stdev=6276.84 00:26:22.268 clat (usec): min=940, max=47228, avg=4334.45, stdev=1486.03 00:26:22.268 lat (usec): min=956, max=47243, avg=4347.95, stdev=1485.34 00:26:22.268 clat percentiles (usec): 00:26:22.268 | 1.00th=[ 3130], 5.00th=[ 3556], 10.00th=[ 3687], 20.00th=[ 3818], 00:26:22.268 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4113], 00:26:22.268 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 5800], 95.00th=[ 6063], 00:26:22.268 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7767], 99.95th=[47449], 00:26:22.268 | 99.99th=[47449] 00:26:22.268 bw ( KiB/s): min=13515, max=14960, per=24.53%, avg=14579.00, stdev=453.43, samples=9 00:26:22.268 iops : min= 1689, max= 1870, avg=1822.33, stdev=56.79, samples=9 00:26:22.268 lat (usec) : 1000=0.01% 00:26:22.268 lat (msec) : 2=0.09%, 4=41.72%, 10=58.09%, 50=0.09% 00:26:22.268 cpu : usr=94.96%, sys=4.54%, ctx=9, majf=0, minf=9 00:26:22.268 IO depths : 1=0.1%, 2=1.7%, 4=70.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.268 issued rwts: total=9142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.268 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:22.268 00:26:22.268 Run status group 0 (all jobs): 00:26:22.268 READ: bw=58.0MiB/s (60.9MB/s), 14.3MiB/s-14.7MiB/s (15.0MB/s-15.5MB/s), io=290MiB (304MB), run=5001-5003msec 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.268 00:26:22.268 real 0m24.376s 00:26:22.268 user 4m33.831s 00:26:22.268 sys 0m6.801s 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:22.268 17:49:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.268 ************************************ 00:26:22.268 END TEST fio_dif_rand_params 00:26:22.268 ************************************ 00:26:22.268 17:49:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:22.268 17:49:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:22.268 17:49:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:22.268 17:49:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.268 17:49:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.268 ************************************ 00:26:22.269 START TEST fio_dif_digest 00:26:22.269 ************************************ 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:22.269 bdev_null0 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.269 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:22.526 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.526 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:22.526 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:22.527 [2024-07-15 17:49:17.420705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.527 { 00:26:22.527 "params": { 00:26:22.527 "name": "Nvme$subsystem", 00:26:22.527 "trtype": "$TEST_TRANSPORT", 00:26:22.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.527 "adrfam": "ipv4", 00:26:22.527 "trsvcid": "$NVMF_PORT", 00:26:22.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.527 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.527 "hdgst": ${hdgst:-false}, 00:26:22.527 "ddgst": ${ddgst:-false} 00:26:22.527 }, 00:26:22.527 "method": "bdev_nvme_attach_controller" 00:26:22.527 } 00:26:22.527 EOF 00:26:22.527 )") 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:22.527 "params": { 00:26:22.527 "name": "Nvme0", 00:26:22.527 "trtype": "tcp", 00:26:22.527 "traddr": "10.0.0.2", 00:26:22.527 "adrfam": "ipv4", 00:26:22.527 "trsvcid": "4420", 00:26:22.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:22.527 "hdgst": true, 00:26:22.527 "ddgst": true 00:26:22.527 }, 00:26:22.527 "method": "bdev_nvme_attach_controller" 00:26:22.527 }' 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:22.527 17:49:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.785 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:22.785 ... 
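Compared with the fio_dif_rand_params jobs above, the functional change in this digest test is the pair of attach parameters "hdgst": true and "ddgst": true, which enable NVMe/TCP PDU header and data digests on the connection that fio exercises. A minimal, hand-written config equivalent to what gen_nvmf_target_json emits would look roughly like the following; the surrounding "subsystems"/"bdev" wrapper is an assumption about the helper's output, and the harness-generated file may carry additional bdev_nvme options.

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }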
00:26:22.785 fio-3.35 00:26:22.785 Starting 3 threads 00:26:22.785 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.988 00:26:34.988 filename0: (groupid=0, jobs=1): err= 0: pid=2354277: Mon Jul 15 17:49:28 2024 00:26:34.988 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(239MiB/10049msec) 00:26:34.988 slat (nsec): min=7542, max=43531, avg=15219.16, stdev=3998.78 00:26:34.988 clat (usec): min=10161, max=53460, avg=15724.80, stdev=1765.33 00:26:34.988 lat (usec): min=10173, max=53480, avg=15740.02, stdev=1765.26 00:26:34.988 clat percentiles (usec): 00:26:34.988 | 1.00th=[12125], 5.00th=[13566], 10.00th=[14091], 20.00th=[14615], 00:26:34.988 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:26:34.988 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[17957], 00:26:34.988 | 99.00th=[18744], 99.50th=[19530], 99.90th=[50070], 99.95th=[53216], 00:26:34.988 | 99.99th=[53216] 00:26:34.988 bw ( KiB/s): min=23040, max=26880, per=32.28%, avg=24435.20, stdev=869.03, samples=20 00:26:34.988 iops : min= 180, max= 210, avg=190.90, stdev= 6.79, samples=20 00:26:34.988 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:26:34.988 cpu : usr=92.76%, sys=6.77%, ctx=21, majf=0, minf=164 00:26:34.988 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.988 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:34.988 filename0: (groupid=0, jobs=1): err= 0: pid=2354278: Mon Jul 15 17:49:28 2024 00:26:34.988 read: IOPS=204, BW=25.6MiB/s (26.9MB/s)(257MiB/10048msec) 00:26:34.988 slat (nsec): min=7443, max=77951, avg=17308.96, stdev=5193.61 00:26:34.988 clat (usec): min=8557, max=52216, avg=14597.63, stdev=1662.13 00:26:34.988 lat (usec): min=8571, max=52254, avg=14614.94, stdev=1662.39 00:26:34.988 clat percentiles (usec): 00:26:34.988 | 1.00th=[10421], 5.00th=[12649], 10.00th=[13173], 20.00th=[13698], 00:26:34.988 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:26:34.988 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:26:34.988 | 99.00th=[17433], 99.50th=[17433], 99.90th=[18220], 99.95th=[49546], 00:26:34.988 | 99.99th=[52167] 00:26:34.988 bw ( KiB/s): min=25344, max=28672, per=34.77%, avg=26319.45, stdev=679.19, samples=20 00:26:34.988 iops : min= 198, max= 224, avg=205.60, stdev= 5.30, samples=20 00:26:34.988 lat (msec) : 10=0.68%, 20=99.22%, 50=0.05%, 100=0.05% 00:26:34.988 cpu : usr=93.19%, sys=6.32%, ctx=22, majf=0, minf=157 00:26:34.988 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.988 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:34.988 filename0: (groupid=0, jobs=1): err= 0: pid=2354279: Mon Jul 15 17:49:28 2024 00:26:34.988 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(247MiB/10046msec) 00:26:34.988 slat (nsec): min=7309, max=42623, avg=14839.92, stdev=3568.53 00:26:34.988 clat (usec): min=10788, max=58247, avg=15243.82, stdev=2793.82 00:26:34.988 lat (usec): min=10807, max=58260, avg=15258.66, stdev=2793.63 00:26:34.988 clat percentiles (usec): 00:26:34.988 
| 1.00th=[12387], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:26:34.988 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15008], 60.00th=[15401], 00:26:34.988 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:26:34.988 | 99.00th=[18220], 99.50th=[19792], 99.90th=[57410], 99.95th=[58459], 00:26:34.988 | 99.99th=[58459] 00:26:34.988 bw ( KiB/s): min=22784, max=26368, per=33.31%, avg=25216.00, stdev=919.28, samples=20 00:26:34.988 iops : min= 178, max= 206, avg=197.00, stdev= 7.18, samples=20 00:26:34.988 lat (msec) : 20=99.59%, 50=0.05%, 100=0.35% 00:26:34.988 cpu : usr=93.23%, sys=6.30%, ctx=24, majf=0, minf=135 00:26:34.988 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.988 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:34.988 00:26:34.988 Run status group 0 (all jobs): 00:26:34.988 READ: bw=73.9MiB/s (77.5MB/s), 23.8MiB/s-25.6MiB/s (24.9MB/s-26.9MB/s), io=743MiB (779MB), run=10046-10049msec 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.988 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.989 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.989 00:26:34.989 real 0m11.214s 00:26:34.989 user 0m29.179s 00:26:34.989 sys 0m2.217s 00:26:34.989 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.989 17:49:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.989 ************************************ 00:26:34.989 END TEST fio_dif_digest 00:26:34.989 ************************************ 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:34.989 17:49:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:34.989 17:49:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:26:34.989 rmmod nvme_tcp 00:26:34.989 rmmod nvme_fabrics 00:26:34.989 rmmod nvme_keyring 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2348080 ']' 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2348080 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2348080 ']' 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2348080 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2348080 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2348080' 00:26:34.989 killing process with pid 2348080 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2348080 00:26:34.989 17:49:28 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2348080 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:34.989 17:49:28 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:34.989 Waiting for block devices as requested 00:26:34.989 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:35.248 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:35.248 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:35.507 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:35.507 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:35.507 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:35.507 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:35.507 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:35.766 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:35.766 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:35.766 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:35.766 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:36.025 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:36.025 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:36.025 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:36.025 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:36.283 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:36.283 17:49:31 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.283 17:49:31 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.283 17:49:31 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.283 17:49:31 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.283 17:49:31 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.283 17:49:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:36.283 17:49:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.820 17:49:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.820 00:26:38.820 real 1m6.670s 00:26:38.820 user 6m30.462s 00:26:38.820 sys 0m18.168s 00:26:38.820 17:49:33 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:26:38.820 17:49:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:38.820 ************************************ 00:26:38.820 END TEST nvmf_dif 00:26:38.820 ************************************ 00:26:38.820 17:49:33 -- common/autotest_common.sh@1142 -- # return 0 00:26:38.820 17:49:33 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:38.820 17:49:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:38.820 17:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.820 17:49:33 -- common/autotest_common.sh@10 -- # set +x 00:26:38.820 ************************************ 00:26:38.820 START TEST nvmf_abort_qd_sizes 00:26:38.820 ************************************ 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:38.820 * Looking for test storage... 00:26:38.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:38.820 17:49:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.821 17:49:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:38.821 17:49:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:40.726 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:40.726 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:40.726 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:40.726 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
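A minimal sketch of the NIC discovery step traced above: gather_supported_nvmf_pci_devs matches the supported PCI IDs (the two E810 functions, 0x8086:0x159b, in this run) and then resolves each PCI function to its kernel net device through sysfs. The BDFs and interface names below are the ones reported here; the loop itself is only an illustration, not the actual helper.

   for pci in 0000:0a:00.0 0000:0a:00.1; do
       # each supported PCI function exposes its net device(s) under sysfs
       for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
           [ -e "$netdir" ] || continue
           echo "Found net devices under $pci: ${netdir##*/}"   # cvl_0_0 / cvl_0_1
       done
   done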
00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.726 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:26:40.726 00:26:40.726 --- 10.0.0.2 ping statistics --- 00:26:40.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.726 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:26:40.727 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:26:40.727 00:26:40.727 --- 10.0.0.1 ping statistics --- 00:26:40.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.727 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:40.727 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.727 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:40.727 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:40.727 17:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:41.661 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:41.661 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:41.661 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:41.661 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:41.661 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:41.661 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:41.661 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:41.661 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:41.661 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:41.661 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:41.661 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:41.919 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:41.919 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:41.919 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:41.919 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:41.919 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:42.893 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2359059 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2359059 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2359059 ']' 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
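In outline, nvmf_tcp_init above splits the two E810 ports across a network namespace so target and initiator traffic crosses a real link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, and nvmfappstart then launches the target inside that namespace. A condensed sketch of the commands seen in the trace (names, addresses and arguments are the ones used in this run; it is assumed to be run as root from the SPDK tree):

   ip netns add cvl_0_0_ns_spdk
   ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
   ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
   ip link set cvl_0_1 up
   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
   ip netns exec cvl_0_0_ns_spdk ip link set lo up
   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
   ping -c 1 10.0.0.2                                   # initiator -> target
   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
   # nvmfappstart: run the NVMe-oF target inside the namespace
   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf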
00:26:42.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.893 17:49:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:42.893 [2024-07-15 17:49:37.917193] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:26:42.893 [2024-07-15 17:49:37.917274] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.893 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.893 [2024-07-15 17:49:37.980463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.152 [2024-07-15 17:49:38.095826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.152 [2024-07-15 17:49:38.095900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.152 [2024-07-15 17:49:38.095915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.152 [2024-07-15 17:49:38.095927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.152 [2024-07-15 17:49:38.095936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.152 [2024-07-15 17:49:38.096027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.152 [2024-07-15 17:49:38.096093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.152 [2024-07-15 17:49:38.096121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.152 [2024-07-15 17:49:38.096124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:26:43.152 17:49:38 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:43.152 17:49:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:43.412 ************************************ 00:26:43.412 START TEST spdk_target_abort 00:26:43.412 ************************************ 00:26:43.412 17:49:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:43.412 17:49:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:43.412 17:49:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:43.412 17:49:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.412 17:49:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.704 spdk_targetn1 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.704 [2024-07-15 17:49:41.138010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.704 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.705 [2024-07-15 17:49:41.170268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:46.705 17:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:46.705 EAL: No free 2048 kB hugepages 
reported on node 1 00:26:49.989 Initializing NVMe Controllers 00:26:49.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:49.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:49.989 Initialization complete. Launching workers. 00:26:49.989 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9480, failed: 0 00:26:49.989 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 8215 00:26:49.989 success 738, unsuccess 527, failed 0 00:26:49.989 17:49:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.989 17:49:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:49.989 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.271 Initializing NVMe Controllers 00:26:53.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:53.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:53.271 Initialization complete. Launching workers. 00:26:53.271 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8678, failed: 0 00:26:53.271 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7448 00:26:53.271 success 369, unsuccess 861, failed 0 00:26:53.271 17:49:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.271 17:49:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:53.271 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.821 Initializing NVMe Controllers 00:26:55.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:55.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:55.821 Initialization complete. Launching workers. 
00:26:55.821 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31718, failed: 0 00:26:55.821 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2639, failed to submit 29079 00:26:55.821 success 546, unsuccess 2093, failed 0 00:26:55.821 17:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:55.821 17:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.821 17:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:55.821 17:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.821 17:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:55.821 17:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.821 17:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2359059 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2359059 ']' 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2359059 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359059 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359059' 00:26:57.197 killing process with pid 2359059 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2359059 00:26:57.197 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2359059 00:26:57.455 00:26:57.455 real 0m14.291s 00:26:57.455 user 0m54.041s 00:26:57.455 sys 0m2.637s 00:26:57.455 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:57.455 17:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.455 ************************************ 00:26:57.455 END TEST spdk_target_abort 00:26:57.455 ************************************ 00:26:57.715 17:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:26:57.715 17:49:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:57.715 17:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:57.715 17:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.715 17:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:57.715 
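Condensed, the spdk_target_abort case that just finished attaches the local NVMe drive (0000:88:00.0) to the SPDK target over PCIe, exports it over NVMe/TCP on 10.0.0.2:4420, and then drives the abort example at increasing queue depths while counting how many abort commands succeed. A sketch using the same RPCs and arguments seen in the trace (rpc_cmd is the test wrapper around scripts/rpc.py; paths assume the SPDK tree as the working directory):

   scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
   scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
   scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
   # queue-depth sweep; each run prints the "success X, unsuccess Y, failed Z" totals above
   for qd in 4 24 64; do
       build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
   done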
************************************ 00:26:57.715 START TEST kernel_target_abort 00:26:57.715 ************************************ 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:57.715 17:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:58.649 Waiting for block devices as requested 00:26:58.649 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:58.908 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:58.908 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:59.166 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:59.166 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:59.166 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:59.166 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:59.166 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:59.427 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:59.427 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:59.427 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:59.686 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:59.686 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:59.686 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:59.686 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:59.945 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:59.945 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:59.945 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:00.205 No valid GPT data, bailing 00:27:00.205 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:00.205 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:00.206 17:49:55 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:00.206 00:27:00.206 Discovery Log Number of Records 2, Generation counter 2 00:27:00.206 =====Discovery Log Entry 0====== 00:27:00.206 trtype: tcp 00:27:00.206 adrfam: ipv4 00:27:00.206 subtype: current discovery subsystem 00:27:00.206 treq: not specified, sq flow control disable supported 00:27:00.206 portid: 1 00:27:00.206 trsvcid: 4420 00:27:00.206 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:00.206 traddr: 10.0.0.1 00:27:00.206 eflags: none 00:27:00.206 sectype: none 00:27:00.206 =====Discovery Log Entry 1====== 00:27:00.206 trtype: tcp 00:27:00.206 adrfam: ipv4 00:27:00.206 subtype: nvme subsystem 00:27:00.206 treq: not specified, sq flow control disable supported 00:27:00.206 portid: 1 00:27:00.206 trsvcid: 4420 00:27:00.206 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:00.206 traddr: 10.0.0.1 00:27:00.206 eflags: none 00:27:00.206 sectype: none 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.206 17:49:55 
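The configure_kernel_target sequence above is the stock Linux nvmet configfs recipe: create a subsystem with a namespace backed by /dev/nvme0n1, create a TCP port on 10.0.0.1:4420, and link the two, which is why the nvme discover output above lists both the discovery subsystem and nqn.2016-06.io.spdk:testnqn. The redirect targets of the echo commands are not visible in the xtrace; the attribute files below are the standard nvmet ones they are assumed to write to.

   modprobe nvmet
   subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
   port=/sys/kernel/config/nvmet/ports/1
   mkdir "$subsys"
   mkdir "$subsys/namespaces/1"
   mkdir "$port"
   echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
   echo 1            > "$subsys/attr_allow_any_host"
   echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
   echo 1            > "$subsys/namespaces/1/enable"
   echo 10.0.0.1     > "$port/addr_traddr"
   echo tcp          > "$port/addr_trtype"
   echo 4420         > "$port/addr_trsvcid"
   echo ipv4         > "$port/addr_adrfam"
   ln -s "$subsys" "$port/subsystems/"
   # verified above with: nvme discover -t tcp -a 10.0.0.1 -s 4420 --hostnqn=... --hostid=...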
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:00.206 17:49:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:00.206 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.491 Initializing NVMe Controllers 00:27:03.492 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:03.492 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:03.492 Initialization complete. Launching workers. 00:27:03.492 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30729, failed: 0 00:27:03.492 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30729, failed to submit 0 00:27:03.492 success 0, unsuccess 30729, failed 0 00:27:03.492 17:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:03.492 17:49:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.492 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.824 Initializing NVMe Controllers 00:27:06.824 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:06.824 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:06.824 Initialization complete. Launching workers. 
00:27:06.824 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59859, failed: 0 00:27:06.824 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15086, failed to submit 44773 00:27:06.824 success 0, unsuccess 15086, failed 0 00:27:06.824 17:50:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:06.824 17:50:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:06.824 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.114 Initializing NVMe Controllers 00:27:10.114 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:10.114 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:10.114 Initialization complete. Launching workers. 00:27:10.114 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60107, failed: 0 00:27:10.114 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15006, failed to submit 45101 00:27:10.114 success 0, unsuccess 15006, failed 0 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:10.114 17:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:10.680 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:10.680 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:10.680 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:10.680 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:10.680 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:10.680 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:10.680 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:10.680 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:10.680 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:10.940 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:10.940 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:10.940 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:10.940 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:10.940 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:10.940 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:10.940 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:11.880 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:11.880 00:27:11.880 real 0m14.272s 00:27:11.880 user 0m4.871s 00:27:11.880 sys 0m3.412s 00:27:11.880 17:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.880 17:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.880 ************************************ 00:27:11.880 END TEST kernel_target_abort 00:27:11.880 ************************************ 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.880 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:11.880 rmmod nvme_tcp 00:27:11.881 rmmod nvme_fabrics 00:27:11.881 rmmod nvme_keyring 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2359059 ']' 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2359059 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2359059 ']' 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2359059 00:27:11.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2359059) - No such process 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2359059 is not found' 00:27:11.881 Process with pid 2359059 is not found 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:11.881 17:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:13.255 Waiting for block devices as requested 00:27:13.255 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:13.255 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:13.255 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:13.255 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:13.514 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:13.514 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:13.514 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:13.514 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:13.773 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:13.773 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:13.773 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:13.773 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:14.031 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:14.031 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:14.031 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:14.031 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:14.292 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:14.292 17:50:09 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.292 17:50:09 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.292 17:50:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.292 17:50:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.292 17:50:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.292 17:50:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:14.292 17:50:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.827 17:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.827 00:27:16.827 real 0m37.940s 00:27:16.827 user 1m1.000s 00:27:16.827 sys 0m9.404s 00:27:16.827 17:50:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.827 17:50:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:16.827 ************************************ 00:27:16.827 END TEST nvmf_abort_qd_sizes 00:27:16.827 ************************************ 00:27:16.827 17:50:11 -- common/autotest_common.sh@1142 -- # return 0 00:27:16.827 17:50:11 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:16.827 17:50:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:16.827 17:50:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.827 17:50:11 -- common/autotest_common.sh@10 -- # set +x 00:27:16.827 ************************************ 00:27:16.827 START TEST keyring_file 00:27:16.827 ************************************ 00:27:16.827 17:50:11 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:16.827 * Looking for test storage... 
00:27:16.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:16.827 17:50:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:16.827 17:50:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.827 17:50:11 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.827 17:50:11 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.827 17:50:11 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.827 17:50:11 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.827 17:50:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.827 17:50:11 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.827 17:50:11 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.827 17:50:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:16.828 17:50:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pe6iJDHv15 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:16.828 17:50:11 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pe6iJDHv15 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pe6iJDHv15 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pe6iJDHv15 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JBpOVfPxV7 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:16.828 17:50:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JBpOVfPxV7 00:27:16.828 17:50:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JBpOVfPxV7 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.JBpOVfPxV7 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=2364832 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:16.828 17:50:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2364832 00:27:16.828 17:50:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2364832 ']' 00:27:16.828 17:50:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.828 17:50:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:16.828 17:50:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.828 17:50:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:16.828 17:50:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:16.828 [2024-07-15 17:50:11.636609] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
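The trace above is the prep_key helper from test/keyring/common.sh building the two PSK files: a temporary path from mktemp, the hex key wrapped into the NVMeTLSkey-1 interchange string by format_interchange_psk (the inline python call traced above, defined in test/nvmf/common.sh), and permissions locked to 0600 before the path is echoed back to file.sh. A shell-level restatement of that flow, with illustrative paths:

prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)                                      # e.g. /tmp/tmp.pe6iJDHv15
    format_interchange_psk "$key" "$digest" > "$path"   # emits the NVMeTLSkey-1:... string
    chmod 0600 "$path"                                  # looser modes are rejected by keyring_file_add_key
    echo "$path"                                        # captured by file.sh as key0path / key1path
}
# key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
# key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)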
00:27:16.828 [2024-07-15 17:50:11.636694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364832 ] 00:27:16.828 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.828 [2024-07-15 17:50:11.699043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.828 [2024-07-15 17:50:11.815435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.761 17:50:12 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.761 17:50:12 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:17.761 17:50:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:17.761 17:50:12 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.761 17:50:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:17.761 [2024-07-15 17:50:12.614268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.761 null0 00:27:17.761 [2024-07-15 17:50:12.646296] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:17.761 [2024-07-15 17:50:12.646734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:17.762 [2024-07-15 17:50:12.654311] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.762 17:50:12 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:17.762 [2024-07-15 17:50:12.662324] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:17.762 request: 00:27:17.762 { 00:27:17.762 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:17.762 "secure_channel": false, 00:27:17.762 "listen_address": { 00:27:17.762 "trtype": "tcp", 00:27:17.762 "traddr": "127.0.0.1", 00:27:17.762 "trsvcid": "4420" 00:27:17.762 }, 00:27:17.762 "method": "nvmf_subsystem_add_listener", 00:27:17.762 "req_id": 1 00:27:17.762 } 00:27:17.762 Got JSON-RPC error response 00:27:17.762 response: 00:27:17.762 { 00:27:17.762 "code": -32602, 00:27:17.762 "message": "Invalid parameters" 00:27:17.762 } 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 
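The failed nvmf_subsystem_add_listener call above is deliberate: the listener already exists, so the test wraps the RPC in NOT and treats the -32602 response as a pass. A minimal sketch of that assertion pattern (the real NOT() in autotest_common.sh additionally validates the command via valid_exec_arg and has a separate branch for exit codes above 128):

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))       # succeed only when the wrapped command failed
}
# NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0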
00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:17.762 17:50:12 keyring_file -- keyring/file.sh@46 -- # bperfpid=2364970 00:27:17.762 17:50:12 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:17.762 17:50:12 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2364970 /var/tmp/bperf.sock 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2364970 ']' 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:17.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.762 17:50:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:17.762 [2024-07-15 17:50:12.709645] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 00:27:17.762 [2024-07-15 17:50:12.709707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364970 ] 00:27:17.762 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.762 [2024-07-15 17:50:12.771368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.762 [2024-07-15 17:50:12.887259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.020 17:50:13 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.020 17:50:13 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:18.020 17:50:13 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:18.020 17:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:18.285 17:50:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JBpOVfPxV7 00:27:18.285 17:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JBpOVfPxV7 00:27:18.545 17:50:13 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:18.545 17:50:13 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:18.545 17:50:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:18.545 17:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:18.545 17:50:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:18.803 17:50:13 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.pe6iJDHv15 == \/\t\m\p\/\t\m\p\.\p\e\6\i\J\D\H\v\1\5 ]] 00:27:18.803 17:50:13 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:18.803 17:50:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:18.803 17:50:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:18.803 17:50:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:18.803 17:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.061 17:50:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JBpOVfPxV7 == \/\t\m\p\/\t\m\p\.\J\B\p\O\V\f\P\x\V\7 ]] 00:27:19.061 17:50:13 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:19.061 17:50:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:19.061 17:50:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:19.061 17:50:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.061 17:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.061 17:50:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:19.320 17:50:14 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:19.320 17:50:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:19.320 17:50:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:19.320 17:50:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:19.320 17:50:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.320 17:50:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:19.320 17:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.578 17:50:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:19.578 17:50:14 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:19.578 17:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:19.836 [2024-07-15 17:50:14.732948] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:19.836 nvme0n1 00:27:19.836 17:50:14 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:19.836 17:50:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:19.836 17:50:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:19.836 17:50:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.836 17:50:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:19.836 17:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.094 17:50:15 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:20.094 17:50:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:20.094 17:50:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:20.094 17:50:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:20.094 17:50:15 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:20.094 17:50:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:20.094 17:50:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.352 17:50:15 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:20.352 17:50:15 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:20.352 Running I/O for 1 seconds... 00:27:21.727 00:27:21.727 Latency(us) 00:27:21.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.727 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:21.727 nvme0n1 : 1.03 4334.61 16.93 0.00 0.00 29134.32 4126.34 35923.44 00:27:21.727 =================================================================================================================== 00:27:21.727 Total : 4334.61 16.93 0.00 0.00 29134.32 4126.34 35923.44 00:27:21.727 0 00:27:21.727 17:50:16 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:21.727 17:50:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:21.727 17:50:16 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:21.727 17:50:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:21.727 17:50:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:21.727 17:50:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:21.727 17:50:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:21.727 17:50:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:21.985 17:50:16 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:21.985 17:50:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:21.985 17:50:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:21.985 17:50:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:21.985 17:50:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:21.985 17:50:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:21.985 17:50:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.244 17:50:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:22.244 17:50:17 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:22.244 17:50:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:22.244 17:50:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:22.244 17:50:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:22.244 17:50:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:22.244 17:50:17 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:22.244 17:50:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:22.244 17:50:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:22.244 17:50:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:22.502 [2024-07-15 17:50:17.466346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:22.502 [2024-07-15 17:50:17.467255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c9a0 (107): Transport endpoint is not connected 00:27:22.502 [2024-07-15 17:50:17.468246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c9a0 (9): Bad file descriptor 00:27:22.502 [2024-07-15 17:50:17.469256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:22.502 [2024-07-15 17:50:17.469276] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:22.502 [2024-07-15 17:50:17.469300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:22.502 request: 00:27:22.502 { 00:27:22.502 "name": "nvme0", 00:27:22.502 "trtype": "tcp", 00:27:22.502 "traddr": "127.0.0.1", 00:27:22.502 "adrfam": "ipv4", 00:27:22.502 "trsvcid": "4420", 00:27:22.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:22.502 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:22.502 "prchk_reftag": false, 00:27:22.502 "prchk_guard": false, 00:27:22.502 "hdgst": false, 00:27:22.502 "ddgst": false, 00:27:22.502 "psk": "key1", 00:27:22.502 "method": "bdev_nvme_attach_controller", 00:27:22.502 "req_id": 1 00:27:22.502 } 00:27:22.502 Got JSON-RPC error response 00:27:22.502 response: 00:27:22.502 { 00:27:22.502 "code": -5, 00:27:22.502 "message": "Input/output error" 00:27:22.502 } 00:27:22.502 17:50:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:22.502 17:50:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:22.502 17:50:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:22.502 17:50:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:22.502 17:50:17 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:22.502 17:50:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:22.502 17:50:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:22.502 17:50:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.502 17:50:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:22.502 17:50:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:22.760 17:50:17 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:22.760 17:50:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:22.760 17:50:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:22.760 17:50:17 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:22.760 17:50:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:22.760 17:50:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:22.760 17:50:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.025 17:50:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:23.025 17:50:17 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:23.025 17:50:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:23.342 17:50:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:23.342 17:50:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:23.600 17:50:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:23.600 17:50:18 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:23.600 17:50:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:23.858 17:50:18 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:23.858 17:50:18 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.pe6iJDHv15 00:27:23.858 17:50:18 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:23.858 17:50:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:23.858 [2024-07-15 17:50:18.962196] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pe6iJDHv15': 0100660 00:27:23.858 [2024-07-15 17:50:18.962250] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:23.858 request: 00:27:23.858 { 00:27:23.858 "name": "key0", 00:27:23.858 "path": "/tmp/tmp.pe6iJDHv15", 00:27:23.858 "method": "keyring_file_add_key", 00:27:23.858 "req_id": 1 00:27:23.858 } 00:27:23.858 Got JSON-RPC error response 00:27:23.858 response: 00:27:23.858 { 00:27:23.858 "code": -1, 00:27:23.858 "message": "Operation not permitted" 00:27:23.858 } 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:23.858 17:50:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:23.858 17:50:18 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:23.858 17:50:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.pe6iJDHv15 00:27:23.858 17:50:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:23.858 17:50:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pe6iJDHv15 00:27:24.424 17:50:19 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.pe6iJDHv15 00:27:24.424 17:50:19 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:24.424 17:50:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:24.424 17:50:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:24.424 17:50:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:24.424 17:50:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:24.424 17:50:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:24.424 17:50:19 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:24.424 17:50:19 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:24.424 17:50:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:24.424 17:50:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:24.424 17:50:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:24.424 17:50:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.424 17:50:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:24.424 17:50:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.424 17:50:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:24.424 17:50:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:24.683 [2024-07-15 17:50:19.740348] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pe6iJDHv15': No such file or directory 00:27:24.683 [2024-07-15 17:50:19.740385] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:24.683 [2024-07-15 17:50:19.740426] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:24.683 [2024-07-15 17:50:19.740439] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:24.683 [2024-07-15 17:50:19.740453] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:24.683 request: 00:27:24.683 { 00:27:24.683 "name": "nvme0", 00:27:24.683 "trtype": "tcp", 00:27:24.683 "traddr": "127.0.0.1", 00:27:24.683 "adrfam": "ipv4", 00:27:24.683 
"trsvcid": "4420", 00:27:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.683 "prchk_reftag": false, 00:27:24.683 "prchk_guard": false, 00:27:24.683 "hdgst": false, 00:27:24.683 "ddgst": false, 00:27:24.683 "psk": "key0", 00:27:24.683 "method": "bdev_nvme_attach_controller", 00:27:24.683 "req_id": 1 00:27:24.683 } 00:27:24.683 Got JSON-RPC error response 00:27:24.683 response: 00:27:24.683 { 00:27:24.683 "code": -19, 00:27:24.683 "message": "No such device" 00:27:24.683 } 00:27:24.683 17:50:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:24.683 17:50:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:24.683 17:50:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:24.683 17:50:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:24.683 17:50:19 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:24.683 17:50:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:24.942 17:50:20 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4TLPqp2Tc1 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:24.942 17:50:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:24.942 17:50:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.942 17:50:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:24.942 17:50:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:24.942 17:50:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:24.942 17:50:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4TLPqp2Tc1 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4TLPqp2Tc1 00:27:24.942 17:50:20 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4TLPqp2Tc1 00:27:24.942 17:50:20 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4TLPqp2Tc1 00:27:24.942 17:50:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4TLPqp2Tc1 00:27:25.199 17:50:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:25.199 17:50:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:25.766 nvme0n1 00:27:25.766 
17:50:20 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:25.766 17:50:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:25.766 17:50:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:25.766 17:50:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:25.766 17:50:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:25.766 17:50:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:25.766 17:50:20 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:25.766 17:50:20 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:25.766 17:50:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:26.024 17:50:21 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:26.024 17:50:21 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:26.024 17:50:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.024 17:50:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.024 17:50:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:26.282 17:50:21 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:26.282 17:50:21 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:26.282 17:50:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:26.282 17:50:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:26.282 17:50:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:26.282 17:50:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.282 17:50:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:26.540 17:50:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:26.540 17:50:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:26.540 17:50:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:26.797 17:50:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:26.797 17:50:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:26.797 17:50:21 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:27.055 17:50:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:27.055 17:50:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4TLPqp2Tc1 00:27:27.055 17:50:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4TLPqp2Tc1 00:27:27.313 17:50:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JBpOVfPxV7 00:27:27.313 17:50:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JBpOVfPxV7 00:27:27.571 17:50:22 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:27.571 17:50:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:27.829 nvme0n1 00:27:28.087 17:50:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:28.087 17:50:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:28.345 17:50:23 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:28.345 "subsystems": [ 00:27:28.345 { 00:27:28.345 "subsystem": "keyring", 00:27:28.345 "config": [ 00:27:28.345 { 00:27:28.345 "method": "keyring_file_add_key", 00:27:28.345 "params": { 00:27:28.345 "name": "key0", 00:27:28.345 "path": "/tmp/tmp.4TLPqp2Tc1" 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "keyring_file_add_key", 00:27:28.345 "params": { 00:27:28.345 "name": "key1", 00:27:28.345 "path": "/tmp/tmp.JBpOVfPxV7" 00:27:28.345 } 00:27:28.345 } 00:27:28.345 ] 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "subsystem": "iobuf", 00:27:28.345 "config": [ 00:27:28.345 { 00:27:28.345 "method": "iobuf_set_options", 00:27:28.345 "params": { 00:27:28.345 "small_pool_count": 8192, 00:27:28.345 "large_pool_count": 1024, 00:27:28.345 "small_bufsize": 8192, 00:27:28.345 "large_bufsize": 135168 00:27:28.345 } 00:27:28.345 } 00:27:28.345 ] 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "subsystem": "sock", 00:27:28.345 "config": [ 00:27:28.345 { 00:27:28.345 "method": "sock_set_default_impl", 00:27:28.345 "params": { 00:27:28.345 "impl_name": "posix" 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "sock_impl_set_options", 00:27:28.345 "params": { 00:27:28.345 "impl_name": "ssl", 00:27:28.345 "recv_buf_size": 4096, 00:27:28.345 "send_buf_size": 4096, 00:27:28.345 "enable_recv_pipe": true, 00:27:28.345 "enable_quickack": false, 00:27:28.345 "enable_placement_id": 0, 00:27:28.345 "enable_zerocopy_send_server": true, 00:27:28.345 "enable_zerocopy_send_client": false, 00:27:28.345 "zerocopy_threshold": 0, 00:27:28.345 "tls_version": 0, 00:27:28.345 "enable_ktls": false 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "sock_impl_set_options", 00:27:28.345 "params": { 00:27:28.345 "impl_name": "posix", 00:27:28.345 "recv_buf_size": 2097152, 00:27:28.345 "send_buf_size": 2097152, 00:27:28.345 "enable_recv_pipe": true, 00:27:28.345 "enable_quickack": false, 00:27:28.345 "enable_placement_id": 0, 00:27:28.345 "enable_zerocopy_send_server": true, 00:27:28.345 "enable_zerocopy_send_client": false, 00:27:28.345 "zerocopy_threshold": 0, 00:27:28.345 "tls_version": 0, 00:27:28.345 "enable_ktls": false 00:27:28.345 } 00:27:28.345 } 00:27:28.345 ] 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "subsystem": "vmd", 00:27:28.345 "config": [] 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "subsystem": "accel", 00:27:28.345 "config": [ 00:27:28.345 { 00:27:28.345 "method": "accel_set_options", 00:27:28.345 "params": { 00:27:28.345 "small_cache_size": 128, 00:27:28.345 "large_cache_size": 16, 00:27:28.345 "task_count": 2048, 00:27:28.345 "sequence_count": 2048, 00:27:28.345 "buf_count": 2048 00:27:28.345 } 00:27:28.345 } 00:27:28.345 ] 00:27:28.345 
}, 00:27:28.345 { 00:27:28.345 "subsystem": "bdev", 00:27:28.345 "config": [ 00:27:28.345 { 00:27:28.345 "method": "bdev_set_options", 00:27:28.345 "params": { 00:27:28.345 "bdev_io_pool_size": 65535, 00:27:28.345 "bdev_io_cache_size": 256, 00:27:28.345 "bdev_auto_examine": true, 00:27:28.345 "iobuf_small_cache_size": 128, 00:27:28.345 "iobuf_large_cache_size": 16 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "bdev_raid_set_options", 00:27:28.345 "params": { 00:27:28.345 "process_window_size_kb": 1024 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "bdev_iscsi_set_options", 00:27:28.345 "params": { 00:27:28.345 "timeout_sec": 30 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "bdev_nvme_set_options", 00:27:28.345 "params": { 00:27:28.345 "action_on_timeout": "none", 00:27:28.345 "timeout_us": 0, 00:27:28.345 "timeout_admin_us": 0, 00:27:28.345 "keep_alive_timeout_ms": 10000, 00:27:28.345 "arbitration_burst": 0, 00:27:28.345 "low_priority_weight": 0, 00:27:28.345 "medium_priority_weight": 0, 00:27:28.345 "high_priority_weight": 0, 00:27:28.345 "nvme_adminq_poll_period_us": 10000, 00:27:28.345 "nvme_ioq_poll_period_us": 0, 00:27:28.345 "io_queue_requests": 512, 00:27:28.345 "delay_cmd_submit": true, 00:27:28.345 "transport_retry_count": 4, 00:27:28.345 "bdev_retry_count": 3, 00:27:28.345 "transport_ack_timeout": 0, 00:27:28.345 "ctrlr_loss_timeout_sec": 0, 00:27:28.345 "reconnect_delay_sec": 0, 00:27:28.345 "fast_io_fail_timeout_sec": 0, 00:27:28.345 "disable_auto_failback": false, 00:27:28.345 "generate_uuids": false, 00:27:28.345 "transport_tos": 0, 00:27:28.345 "nvme_error_stat": false, 00:27:28.345 "rdma_srq_size": 0, 00:27:28.345 "io_path_stat": false, 00:27:28.345 "allow_accel_sequence": false, 00:27:28.345 "rdma_max_cq_size": 0, 00:27:28.345 "rdma_cm_event_timeout_ms": 0, 00:27:28.345 "dhchap_digests": [ 00:27:28.345 "sha256", 00:27:28.345 "sha384", 00:27:28.345 "sha512" 00:27:28.345 ], 00:27:28.345 "dhchap_dhgroups": [ 00:27:28.345 "null", 00:27:28.345 "ffdhe2048", 00:27:28.345 "ffdhe3072", 00:27:28.345 "ffdhe4096", 00:27:28.345 "ffdhe6144", 00:27:28.345 "ffdhe8192" 00:27:28.345 ] 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "bdev_nvme_attach_controller", 00:27:28.345 "params": { 00:27:28.345 "name": "nvme0", 00:27:28.345 "trtype": "TCP", 00:27:28.345 "adrfam": "IPv4", 00:27:28.345 "traddr": "127.0.0.1", 00:27:28.345 "trsvcid": "4420", 00:27:28.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:28.345 "prchk_reftag": false, 00:27:28.345 "prchk_guard": false, 00:27:28.345 "ctrlr_loss_timeout_sec": 0, 00:27:28.345 "reconnect_delay_sec": 0, 00:27:28.345 "fast_io_fail_timeout_sec": 0, 00:27:28.345 "psk": "key0", 00:27:28.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:28.345 "hdgst": false, 00:27:28.345 "ddgst": false 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "bdev_nvme_set_hotplug", 00:27:28.345 "params": { 00:27:28.345 "period_us": 100000, 00:27:28.345 "enable": false 00:27:28.345 } 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "method": "bdev_wait_for_examine" 00:27:28.345 } 00:27:28.345 ] 00:27:28.345 }, 00:27:28.345 { 00:27:28.345 "subsystem": "nbd", 00:27:28.345 "config": [] 00:27:28.346 } 00:27:28.346 ] 00:27:28.346 }' 00:27:28.346 17:50:23 keyring_file -- keyring/file.sh@114 -- # killprocess 2364970 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2364970 ']' 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2364970 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364970 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364970' 00:27:28.346 killing process with pid 2364970 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@967 -- # kill 2364970 00:27:28.346 Received shutdown signal, test time was about 1.000000 seconds 00:27:28.346 00:27:28.346 Latency(us) 00:27:28.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.346 =================================================================================================================== 00:27:28.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.346 17:50:23 keyring_file -- common/autotest_common.sh@972 -- # wait 2364970 00:27:28.604 17:50:23 keyring_file -- keyring/file.sh@117 -- # bperfpid=2366316 00:27:28.604 17:50:23 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2366316 /var/tmp/bperf.sock 00:27:28.604 17:50:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2366316 ']' 00:27:28.604 17:50:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.604 17:50:23 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:28.604 17:50:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.604 17:50:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
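The second bdevperf instance being launched here is fed the configuration that save_config captured over the bperf socket a moment earlier, while the first instance was still running; that is why the trace shows -c /dev/fd/63. Roughly equivalent shell, with workspace-relative paths for brevity:

# capture the running target's config, including both keyring_file_add_key entries
config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# ... the previous bdevperf is then killed, freeing the RPC socket ...
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")

The keyring_get_keys length and refcnt checks that follow confirm both keys were reloaded from that config and that key0 is referenced again once the controller attaches.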
00:27:28.604 17:50:23 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:28.604 "subsystems": [ 00:27:28.604 { 00:27:28.604 "subsystem": "keyring", 00:27:28.604 "config": [ 00:27:28.604 { 00:27:28.604 "method": "keyring_file_add_key", 00:27:28.604 "params": { 00:27:28.604 "name": "key0", 00:27:28.604 "path": "/tmp/tmp.4TLPqp2Tc1" 00:27:28.604 } 00:27:28.604 }, 00:27:28.604 { 00:27:28.604 "method": "keyring_file_add_key", 00:27:28.604 "params": { 00:27:28.604 "name": "key1", 00:27:28.604 "path": "/tmp/tmp.JBpOVfPxV7" 00:27:28.604 } 00:27:28.604 } 00:27:28.604 ] 00:27:28.604 }, 00:27:28.605 { 00:27:28.605 "subsystem": "iobuf", 00:27:28.605 "config": [ 00:27:28.605 { 00:27:28.605 "method": "iobuf_set_options", 00:27:28.605 "params": { 00:27:28.605 "small_pool_count": 8192, 00:27:28.605 "large_pool_count": 1024, 00:27:28.605 "small_bufsize": 8192, 00:27:28.605 "large_bufsize": 135168 00:27:28.605 } 00:27:28.605 } 00:27:28.605 ] 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "subsystem": "sock", 00:27:28.605 "config": [ 00:27:28.605 { 00:27:28.605 "method": "sock_set_default_impl", 00:27:28.605 "params": { 00:27:28.605 "impl_name": "posix" 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": "sock_impl_set_options", 00:27:28.605 "params": { 00:27:28.605 "impl_name": "ssl", 00:27:28.605 "recv_buf_size": 4096, 00:27:28.605 "send_buf_size": 4096, 00:27:28.605 "enable_recv_pipe": true, 00:27:28.605 "enable_quickack": false, 00:27:28.605 "enable_placement_id": 0, 00:27:28.605 "enable_zerocopy_send_server": true, 00:27:28.605 "enable_zerocopy_send_client": false, 00:27:28.605 "zerocopy_threshold": 0, 00:27:28.605 "tls_version": 0, 00:27:28.605 "enable_ktls": false 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": "sock_impl_set_options", 00:27:28.605 "params": { 00:27:28.605 "impl_name": "posix", 00:27:28.605 "recv_buf_size": 2097152, 00:27:28.605 "send_buf_size": 2097152, 00:27:28.605 "enable_recv_pipe": true, 00:27:28.605 "enable_quickack": false, 00:27:28.605 "enable_placement_id": 0, 00:27:28.605 "enable_zerocopy_send_server": true, 00:27:28.605 "enable_zerocopy_send_client": false, 00:27:28.605 "zerocopy_threshold": 0, 00:27:28.605 "tls_version": 0, 00:27:28.605 "enable_ktls": false 00:27:28.605 } 00:27:28.605 } 00:27:28.605 ] 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "subsystem": "vmd", 00:27:28.605 "config": [] 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "subsystem": "accel", 00:27:28.605 "config": [ 00:27:28.605 { 00:27:28.605 "method": "accel_set_options", 00:27:28.605 "params": { 00:27:28.605 "small_cache_size": 128, 00:27:28.605 "large_cache_size": 16, 00:27:28.605 "task_count": 2048, 00:27:28.605 "sequence_count": 2048, 00:27:28.605 "buf_count": 2048 00:27:28.605 } 00:27:28.605 } 00:27:28.605 ] 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "subsystem": "bdev", 00:27:28.605 "config": [ 00:27:28.605 { 00:27:28.605 "method": "bdev_set_options", 00:27:28.605 "params": { 00:27:28.605 "bdev_io_pool_size": 65535, 00:27:28.605 "bdev_io_cache_size": 256, 00:27:28.605 "bdev_auto_examine": true, 00:27:28.605 "iobuf_small_cache_size": 128, 00:27:28.605 "iobuf_large_cache_size": 16 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": "bdev_raid_set_options", 00:27:28.605 "params": { 00:27:28.605 "process_window_size_kb": 1024 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": "bdev_iscsi_set_options", 00:27:28.605 "params": { 00:27:28.605 "timeout_sec": 30 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": 
"bdev_nvme_set_options", 00:27:28.605 "params": { 00:27:28.605 "action_on_timeout": "none", 00:27:28.605 "timeout_us": 0, 00:27:28.605 "timeout_admin_us": 0, 00:27:28.605 "keep_alive_timeout_ms": 10000, 00:27:28.605 "arbitration_burst": 0, 00:27:28.605 "low_priority_weight": 0, 00:27:28.605 "medium_priority_weight": 0, 00:27:28.605 "high_priority_weight": 0, 00:27:28.605 "nvme_adminq_poll_period_us": 10000, 00:27:28.605 "nvme_ioq_poll_period_us": 0, 00:27:28.605 "io_queue_requests": 512, 00:27:28.605 "delay_cmd_submit": true, 00:27:28.605 "transport_retry_count": 4, 00:27:28.605 "bdev_retry_count": 3, 00:27:28.605 "transport_ack_timeout": 0, 00:27:28.605 "ctrlr_loss_timeout_sec": 0, 00:27:28.605 "reconnect_delay_sec": 0, 00:27:28.605 "fast_io_fail_timeout_sec": 0, 00:27:28.605 "disable_auto_failback": false, 00:27:28.605 "generate_uuids": false, 00:27:28.605 "transport_tos": 0, 00:27:28.605 "nvme_error_stat": false, 00:27:28.605 "rdma_srq_size": 0, 00:27:28.605 "io_path_stat": false, 00:27:28.605 "allow_accel_sequence": false, 00:27:28.605 "rdma_max_cq_size": 0, 00:27:28.605 "rdma_cm_event_timeout_ms": 0, 00:27:28.605 "dhchap_digests": [ 00:27:28.605 "sha256", 00:27:28.605 "sha384", 00:27:28.605 "sha512" 00:27:28.605 ], 00:27:28.605 "dhchap_dhgroups": [ 00:27:28.605 "null", 00:27:28.605 "ffdhe2048", 00:27:28.605 "ffdhe3072", 00:27:28.605 "ffdhe4096", 00:27:28.605 "ffdhe6144", 00:27:28.605 "ffdhe8192" 00:27:28.605 ] 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": "bdev_nvme_attach_controller", 00:27:28.605 "params": { 00:27:28.605 "name": "nvme0", 00:27:28.605 "trtype": "TCP", 00:27:28.605 "adrfam": "IPv4", 00:27:28.605 "traddr": "127.0.0.1", 00:27:28.605 "trsvcid": "4420", 00:27:28.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:28.605 "prchk_reftag": false, 00:27:28.605 "prchk_guard": false, 00:27:28.605 "ctrlr_loss_timeout_sec": 0, 00:27:28.605 "reconnect_delay_sec": 0, 00:27:28.605 "fast_io_fail_timeout_sec": 0, 00:27:28.605 "psk": "key0", 00:27:28.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:28.605 "hdgst": false, 00:27:28.605 "ddgst": false 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": "bdev_nvme_set_hotplug", 00:27:28.605 "params": { 00:27:28.605 "period_us": 100000, 00:27:28.605 "enable": false 00:27:28.605 } 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "method": "bdev_wait_for_examine" 00:27:28.605 } 00:27:28.605 ] 00:27:28.605 }, 00:27:28.605 { 00:27:28.605 "subsystem": "nbd", 00:27:28.605 "config": [] 00:27:28.605 } 00:27:28.605 ] 00:27:28.605 }' 00:27:28.605 17:50:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.605 17:50:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.605 [2024-07-15 17:50:23.593078] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
00:27:28.605 [2024-07-15 17:50:23.593155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366316 ] 00:27:28.605 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.605 [2024-07-15 17:50:23.654129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.863 [2024-07-15 17:50:23.771575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.864 [2024-07-15 17:50:23.958892] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:29.435 17:50:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.435 17:50:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:29.435 17:50:24 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:29.435 17:50:24 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:29.435 17:50:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.691 17:50:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:29.691 17:50:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:29.691 17:50:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:29.691 17:50:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.691 17:50:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.691 17:50:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.691 17:50:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:29.947 17:50:25 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:29.947 17:50:25 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:29.947 17:50:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:29.947 17:50:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.947 17:50:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.947 17:50:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.947 17:50:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.204 17:50:25 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:30.204 17:50:25 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:30.204 17:50:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:30.204 17:50:25 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:30.460 17:50:25 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:30.460 17:50:25 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:30.460 17:50:25 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4TLPqp2Tc1 /tmp/tmp.JBpOVfPxV7 00:27:30.460 17:50:25 keyring_file -- keyring/file.sh@20 -- # killprocess 2366316 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2366316 ']' 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2366316 00:27:30.460 17:50:25 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366316 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366316' 00:27:30.460 killing process with pid 2366316 00:27:30.460 17:50:25 keyring_file -- common/autotest_common.sh@967 -- # kill 2366316 00:27:30.460 Received shutdown signal, test time was about 1.000000 seconds 00:27:30.461 00:27:30.461 Latency(us) 00:27:30.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.461 =================================================================================================================== 00:27:30.461 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:30.461 17:50:25 keyring_file -- common/autotest_common.sh@972 -- # wait 2366316 00:27:30.717 17:50:25 keyring_file -- keyring/file.sh@21 -- # killprocess 2364832 00:27:30.717 17:50:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2364832 ']' 00:27:30.717 17:50:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2364832 00:27:30.717 17:50:25 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:30.717 17:50:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:30.717 17:50:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2364832 00:27:30.974 17:50:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:30.974 17:50:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:30.974 17:50:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2364832' 00:27:30.974 killing process with pid 2364832 00:27:30.974 17:50:25 keyring_file -- common/autotest_common.sh@967 -- # kill 2364832 00:27:30.974 [2024-07-15 17:50:25.855223] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:30.974 17:50:25 keyring_file -- common/autotest_common.sh@972 -- # wait 2364832 00:27:31.233 00:27:31.233 real 0m14.874s 00:27:31.233 user 0m36.076s 00:27:31.233 sys 0m3.251s 00:27:31.233 17:50:26 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.233 17:50:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:31.233 ************************************ 00:27:31.233 END TEST keyring_file 00:27:31.233 ************************************ 00:27:31.233 17:50:26 -- common/autotest_common.sh@1142 -- # return 0 00:27:31.233 17:50:26 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:31.233 17:50:26 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:31.233 17:50:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.233 17:50:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.233 17:50:26 -- common/autotest_common.sh@10 -- # set +x 00:27:31.233 ************************************ 00:27:31.233 START TEST keyring_linux 00:27:31.233 ************************************ 00:27:31.233 17:50:26 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:31.492 * Looking for test storage... 00:27:31.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.492 17:50:26 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.492 17:50:26 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.492 17:50:26 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.492 17:50:26 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.492 17:50:26 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.492 17:50:26 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.492 17:50:26 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:31.492 17:50:26 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:31.492 17:50:26 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:31.492 /tmp/:spdk-test:key0 00:27:31.492 17:50:26 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:31.492 17:50:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:31.492 17:50:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:31.493 /tmp/:spdk-test:key1 00:27:31.493 17:50:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2366793 00:27:31.493 17:50:26 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:31.493 17:50:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2366793 00:27:31.493 17:50:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2366793 ']' 00:27:31.493 17:50:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.493 17:50:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.493 17:50:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.493 17:50:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.493 17:50:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:31.493 [2024-07-15 17:50:26.562026] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
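The prep_key calls above derive the on-disk PSK files for this keyring_linux run: the raw key strings (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00) are passed through format_interchange_psk, written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, and restricted to mode 0600. The sketch below is a reconstruction of what the inline "python -" heredoc appears to compute, with the CRC-32 step being an assumption that is consistent with the NVMeTLSkey-1 strings added to the keyring later in this log; the actual helper lives in nvmf/common.sh.

# Illustrative reconstruction of the format_interchange_psk step shown above,
# assuming the heredoc appends a little-endian CRC-32 of the key bytes before
# base64-encoding and wrapping the result in the NVMeTLSkey-1:<digest>: prefix.
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0, prefix: str = "NVMeTLSkey-1") -> str:
    data = key.encode()                                    # key string as raw bytes
    crc = zlib.crc32(data).to_bytes(4, byteorder="little") # integrity tail
    b64 = base64.b64encode(data + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, digest, b64)

# With key0 from the log this should yield the value keyctl stores below:
#   format_interchange_psk("00112233445566778899aabbccddeeff", 0)
#   -> "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"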
00:27:31.493 [2024-07-15 17:50:26.562112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366793 ] 00:27:31.493 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.493 [2024-07-15 17:50:26.619002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.751 [2024-07-15 17:50:26.728496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.008 17:50:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:32.008 17:50:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:32.008 17:50:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:32.008 17:50:26 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.008 17:50:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:32.008 [2024-07-15 17:50:26.994475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.008 null0 00:27:32.008 [2024-07-15 17:50:27.026530] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:32.008 [2024-07-15 17:50:27.027040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:32.008 17:50:27 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.008 17:50:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:32.008 758800115 00:27:32.008 17:50:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:32.008 582290470 00:27:32.008 17:50:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2366811 00:27:32.008 17:50:27 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:32.008 17:50:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2366811 /var/tmp/bperf.sock 00:27:32.008 17:50:27 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2366811 ']' 00:27:32.009 17:50:27 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:32.009 17:50:27 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.009 17:50:27 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:32.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:32.009 17:50:27 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.009 17:50:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:32.009 [2024-07-15 17:50:27.093838] Starting SPDK v24.09-pre git sha1 d8f06a5fe / DPDK 24.03.0 initialization... 
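Before bdevperf is attached, linux.sh@66-67 above seed the kernel session keyring: keyctl add user :spdk-test:key0/:spdk-test:key1 with the interchange-format PSKs returns the serial numbers 758800115 and 582290470, which the test later re-resolves with keyctl search and removes with keyctl unlink during cleanup. A minimal sketch of that keyring bookkeeping follows; the wrapper functions are hypothetical, while the keyctl invocations and key name/payload mirror the ones in this log.

# Sketch of the session-keyring bookkeeping performed around the bdevperf run.
import subprocess

def keyctl(*args: str) -> str:
    return subprocess.run(["keyctl", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

def add_session_key(name: str, payload: str) -> int:
    # keyctl add user <name> <payload> @s   -> prints the new key's serial number
    return int(keyctl("add", "user", name, payload, "@s"))

def find_session_key(name: str) -> int:
    # keyctl search @s user <name>          -> prints the serial if the key exists
    return int(keyctl("search", "@s", "user", name))

def unlink_key(serial: int) -> None:
    # keyctl unlink <serial>                -> "1 links removed" on success
    keyctl("unlink", str(serial))

if __name__ == "__main__":
    sn = add_session_key(":spdk-test:key0",
                         "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:")
    assert sn == find_session_key(":spdk-test:key0")
    print(keyctl("print", str(sn)))  # shows the stored interchange PSK
    unlink_key(sn)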
00:27:32.009 [2024-07-15 17:50:27.093943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366811 ] 00:27:32.009 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.267 [2024-07-15 17:50:27.153991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.267 [2024-07-15 17:50:27.270112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.267 17:50:27 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:32.267 17:50:27 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:32.267 17:50:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:32.267 17:50:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:32.524 17:50:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:32.524 17:50:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:32.781 17:50:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:32.781 17:50:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:33.038 [2024-07-15 17:50:28.099793] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:33.296 nvme0n1 00:27:33.296 17:50:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:33.296 17:50:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:33.296 17:50:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:33.296 17:50:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:33.296 17:50:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:33.296 17:50:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.554 17:50:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:33.554 17:50:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:33.554 17:50:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:33.554 17:50:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:33.554 17:50:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.554 17:50:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.554 17:50:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:33.811 17:50:28 keyring_linux -- keyring/linux.sh@25 -- # sn=758800115 00:27:33.811 17:50:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:33.811 17:50:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:33.811 17:50:28 keyring_linux -- keyring/linux.sh@26 -- # [[ 758800115 == \7\5\8\8\0\0\1\1\5 ]] 00:27:33.811 17:50:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 758800115 00:27:33.811 17:50:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:33.811 17:50:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:33.811 Running I/O for 1 seconds... 00:27:34.743 00:27:34.743 Latency(us) 00:27:34.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.743 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:34.743 nvme0n1 : 1.02 3877.23 15.15 0.00 0.00 32646.08 14078.10 48739.37 00:27:34.743 =================================================================================================================== 00:27:34.743 Total : 3877.23 15.15 0.00 0.00 32646.08 14078.10 48739.37 00:27:34.743 0 00:27:34.743 17:50:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:34.743 17:50:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:35.001 17:50:30 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:35.001 17:50:30 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:35.001 17:50:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:35.001 17:50:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:35.001 17:50:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:35.001 17:50:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.258 17:50:30 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:35.258 17:50:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:35.258 17:50:30 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:35.258 17:50:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:35.258 17:50:30 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:35.258 17:50:30 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:35.258 17:50:30 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:35.258 17:50:30 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.258 17:50:30 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:35.258 17:50:30 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.258 17:50:30 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:35.258 17:50:30 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:35.516 [2024-07-15 17:50:30.608394] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:35.516 [2024-07-15 17:50:30.608566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa3030 (107): Transport endpoint is not connected 00:27:35.516 [2024-07-15 17:50:30.609558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa3030 (9): Bad file descriptor 00:27:35.516 [2024-07-15 17:50:30.610555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:35.516 [2024-07-15 17:50:30.610578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:35.516 [2024-07-15 17:50:30.610594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:35.516 request: 00:27:35.516 { 00:27:35.516 "name": "nvme0", 00:27:35.516 "trtype": "tcp", 00:27:35.516 "traddr": "127.0.0.1", 00:27:35.516 "adrfam": "ipv4", 00:27:35.516 "trsvcid": "4420", 00:27:35.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.516 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:35.516 "prchk_reftag": false, 00:27:35.516 "prchk_guard": false, 00:27:35.516 "hdgst": false, 00:27:35.516 "ddgst": false, 00:27:35.516 "psk": ":spdk-test:key1", 00:27:35.516 "method": "bdev_nvme_attach_controller", 00:27:35.516 "req_id": 1 00:27:35.516 } 00:27:35.516 Got JSON-RPC error response 00:27:35.516 response: 00:27:35.516 { 00:27:35.516 "code": -5, 00:27:35.516 "message": "Input/output error" 00:27:35.516 } 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@33 -- # sn=758800115 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 758800115 00:27:35.516 1 links removed 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@33 -- # sn=582290470 00:27:35.516 17:50:30 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 582290470 00:27:35.516 1 links removed 00:27:35.516 17:50:30 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2366811 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2366811 ']' 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2366811 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:35.516 17:50:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366811 00:27:35.774 17:50:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:35.774 17:50:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:35.774 17:50:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366811' 00:27:35.774 killing process with pid 2366811 00:27:35.774 17:50:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 2366811 00:27:35.774 Received shutdown signal, test time was about 1.000000 seconds 00:27:35.774 00:27:35.774 Latency(us) 00:27:35.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.774 =================================================================================================================== 00:27:35.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:35.774 17:50:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 2366811 00:27:36.032 17:50:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2366793 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2366793 ']' 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2366793 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366793 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366793' 00:27:36.032 killing process with pid 2366793 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 2366793 00:27:36.032 17:50:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 2366793 00:27:36.290 00:27:36.290 real 0m5.061s 00:27:36.290 user 0m9.366s 00:27:36.290 sys 0m1.544s 00:27:36.290 17:50:31 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.290 17:50:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:36.290 ************************************ 00:27:36.290 END TEST keyring_linux 00:27:36.290 ************************************ 00:27:36.547 17:50:31 -- common/autotest_common.sh@1142 -- # return 0 00:27:36.547 17:50:31 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@339 
-- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:36.547 17:50:31 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:36.548 17:50:31 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:36.548 17:50:31 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:36.548 17:50:31 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:36.548 17:50:31 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:36.548 17:50:31 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:36.548 17:50:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.548 17:50:31 -- common/autotest_common.sh@10 -- # set +x 00:27:36.548 17:50:31 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:36.548 17:50:31 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:36.548 17:50:31 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:36.548 17:50:31 -- common/autotest_common.sh@10 -- # set +x 00:27:38.474 INFO: APP EXITING 00:27:38.474 INFO: killing all VMs 00:27:38.474 INFO: killing vhost app 00:27:38.474 INFO: EXIT DONE 00:27:39.408 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:39.408 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:39.408 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:39.408 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:39.408 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:39.408 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:39.408 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:39.408 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:39.408 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:39.408 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:39.408 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:39.408 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:39.408 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:39.408 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:39.408 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:39.408 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:39.408 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:40.786 Cleaning 00:27:40.786 Removing: /var/run/dpdk/spdk0/config 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:40.786 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:40.786 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:40.786 Removing: /var/run/dpdk/spdk1/config 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:40.786 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:40.786 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:40.786 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:40.786 Removing: /var/run/dpdk/spdk2/config 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:40.786 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:40.786 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:40.786 Removing: /var/run/dpdk/spdk3/config 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:40.786 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:40.787 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:40.787 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:40.787 Removing: /var/run/dpdk/spdk4/config 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:40.787 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:40.787 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:40.787 Removing: /dev/shm/bdev_svc_trace.1 00:27:40.787 Removing: /dev/shm/nvmf_trace.0 00:27:40.787 Removing: /dev/shm/spdk_tgt_trace.pid2105746 00:27:40.787 Removing: /var/run/dpdk/spdk0 00:27:40.787 Removing: /var/run/dpdk/spdk1 00:27:40.787 Removing: /var/run/dpdk/spdk2 00:27:40.787 Removing: /var/run/dpdk/spdk3 00:27:40.787 Removing: /var/run/dpdk/spdk4 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2104070 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2104817 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2105746 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2106178 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2106871 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2107013 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2107732 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2107867 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2108109 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2109303 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2110322 00:27:40.787 Removing: 
/var/run/dpdk/spdk_pid2110546 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2110846 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2111051 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2111244 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2111434 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2111668 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2111908 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2112167 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2115032 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2115309 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2115475 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2115607 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2115930 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2116052 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2116366 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2116484 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2116664 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2116802 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2116966 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2117092 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2117464 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2117624 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2117932 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2118100 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2118129 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2118313 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2118470 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2118633 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2118905 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2119058 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2119225 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2119493 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2119652 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2119814 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2120087 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2120240 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2120406 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2120675 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2120832 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2120994 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2121267 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2121421 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2121590 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2121863 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2122020 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2122304 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2122366 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2122572 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2124745 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2151566 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2154184 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2161152 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2164328 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2166815 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2167338 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2171184 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2175155 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2175157 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2175811 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2176361 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2177016 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2177536 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2177539 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2177688 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2177823 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2177827 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2178479 00:27:40.787 Removing: 
/var/run/dpdk/spdk_pid2179104 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2179796 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2180285 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2180502 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2180959 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2181843 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2182565 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2187933 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2188201 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2190710 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2194411 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2196583 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2203039 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2208204 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2209514 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2210178 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2220870 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2223080 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2248524 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2251309 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2252503 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2253816 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2253952 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2253979 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2254112 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2254678 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2255984 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2256733 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2257163 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2258781 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2259342 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2259901 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2262299 00:27:40.787 Removing: /var/run/dpdk/spdk_pid2268200 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2271580 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2275473 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2276420 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2277515 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2280072 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2282422 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2286640 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2286656 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2289527 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2289668 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2289802 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2290082 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2290195 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2292835 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2293283 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2295945 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2297820 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2301353 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2304677 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2311529 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2315992 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2315996 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2328194 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2328606 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2329127 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2329542 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2330117 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2330531 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2330943 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2331462 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2333846 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2334110 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2337899 00:27:41.046 Removing: 
/var/run/dpdk/spdk_pid2338081 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2339694 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2345344 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2345349 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2348241 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2349557 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2351061 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2351801 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2353336 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2354092 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2359430 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2359761 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2360155 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2361701 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2362101 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2362381 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2364832 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2364970 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2366316 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2366793 00:27:41.046 Removing: /var/run/dpdk/spdk_pid2366811 00:27:41.046 Clean 00:27:41.046 17:50:36 -- common/autotest_common.sh@1451 -- # return 0 00:27:41.046 17:50:36 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:41.046 17:50:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.046 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:27:41.046 17:50:36 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:41.046 17:50:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.046 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:27:41.046 17:50:36 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:41.046 17:50:36 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:41.046 17:50:36 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:41.046 17:50:36 -- spdk/autotest.sh@391 -- # hash lcov 00:27:41.046 17:50:36 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:41.046 17:50:36 -- spdk/autotest.sh@393 -- # hostname 00:27:41.046 17:50:36 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:41.304 geninfo: WARNING: invalid characters removed from testname! 
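The lcov capture above and the merge/filter steps that follow assemble the final coverage report: the post-test capture over the spdk tree (tagged with the hostname spdk-gp-11) is combined with the pre-test baseline, and the combined report is then stripped of DPDK, system-header, and example/app paths before the intermediate .info files are deleted. The sketch below only illustrates that sequence; the wrapper is hypothetical, the paths and filter patterns are the ones in the log, and the --rc flags are abbreviated (the log also passes the genhtml/geninfo variants).

# Hypothetical wrapper mirroring the lcov post-processing sequence in this log.
import subprocess

SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
OUT = SPDK + "/../output"
LCOV_RC = ["--rc", "lcov_branch_coverage=1", "--rc", "lcov_function_coverage=1"]

def lcov(*args: str) -> None:
    subprocess.run(["lcov", *LCOV_RC, "--no-external", "-q", *args], check=True)

# Capture coverage gathered while the tests ran, tagged with the hostname.
lcov("-c", "-d", SPDK, "-t", "spdk-gp-11", "-o", OUT + "/cov_test.info")

# Merge the pre-test baseline with the test capture.
lcov("-a", OUT + "/cov_base.info", "-a", OUT + "/cov_test.info",
     "-o", OUT + "/cov_total.info")

# Drop third-party, system, and example/app sources from the combined report.
for pattern in ("*/dpdk/*", "/usr/*", "*/examples/vmd/*",
                "*/app/spdk_lspci/*", "*/app/spdk_top/*"):
    lcov("-r", OUT + "/cov_total.info", pattern, "-o", OUT + "/cov_total.info")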
00:28:13.367 17:51:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:14.740 17:51:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:18.949 17:51:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:22.230 17:51:17 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:26.442 17:51:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:29.724 17:51:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:33.006 17:51:28 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:33.265 17:51:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.265 17:51:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:33.265 17:51:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.265 17:51:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.265 17:51:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.265 17:51:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.265 17:51:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.265 17:51:28 -- paths/export.sh@5 -- $ export PATH 00:28:33.265 17:51:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.265 17:51:28 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:33.265 17:51:28 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:33.265 17:51:28 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721058688.XXXXXX 00:28:33.265 17:51:28 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721058688.SyiUHA 00:28:33.265 17:51:28 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:33.265 17:51:28 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:33.265 17:51:28 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:33.265 17:51:28 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:33.265 17:51:28 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:33.265 17:51:28 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:33.265 17:51:28 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:33.265 17:51:28 -- common/autotest_common.sh@10 -- $ set +x 00:28:33.265 17:51:28 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:33.265 17:51:28 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:33.265 17:51:28 -- pm/common@17 -- $ local monitor 00:28:33.265 17:51:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.265 17:51:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.265 17:51:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.265 17:51:28 -- pm/common@21 -- $ date +%s 00:28:33.265 17:51:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.265 17:51:28 -- pm/common@21 -- $ date +%s 00:28:33.265 
17:51:28 -- pm/common@25 -- $ sleep 1 00:28:33.265 17:51:28 -- pm/common@21 -- $ date +%s 00:28:33.265 17:51:28 -- pm/common@21 -- $ date +%s 00:28:33.265 17:51:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721058688 00:28:33.265 17:51:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721058688 00:28:33.265 17:51:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721058688 00:28:33.266 17:51:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721058688 00:28:33.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721058688_collect-vmstat.pm.log 00:28:33.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721058688_collect-cpu-load.pm.log 00:28:33.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721058688_collect-cpu-temp.pm.log 00:28:33.266 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721058688_collect-bmc-pm.bmc.pm.log 00:28:34.201 17:51:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:34.201 17:51:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:34.201 17:51:29 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:34.201 17:51:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:34.201 17:51:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:34.201 17:51:29 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:34.201 17:51:29 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:34.201 17:51:29 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:34.201 17:51:29 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:34.201 17:51:29 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:34.201 17:51:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:34.201 17:51:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:34.201 17:51:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:34.201 17:51:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:34.201 17:51:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:34.201 17:51:29 -- pm/common@44 -- $ pid=2377112 00:28:34.201 17:51:29 -- pm/common@50 -- $ kill -TERM 2377112 00:28:34.201 17:51:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:34.201 17:51:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:34.201 17:51:29 -- pm/common@44 -- $ pid=2377114 00:28:34.201 17:51:29 -- pm/common@50 -- $ kill 
-TERM 2377114 00:28:34.201 17:51:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:34.201 17:51:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:34.201 17:51:29 -- pm/common@44 -- $ pid=2377116 00:28:34.201 17:51:29 -- pm/common@50 -- $ kill -TERM 2377116 00:28:34.201 17:51:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:34.201 17:51:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:34.201 17:51:29 -- pm/common@44 -- $ pid=2377146 00:28:34.201 17:51:29 -- pm/common@50 -- $ sudo -E kill -TERM 2377146 00:28:34.201 + [[ -n 2020543 ]] 00:28:34.201 + sudo kill 2020543 00:28:34.211 [Pipeline] } 00:28:34.228 [Pipeline] // stage 00:28:34.232 [Pipeline] } 00:28:34.246 [Pipeline] // timeout 00:28:34.251 [Pipeline] } 00:28:34.270 [Pipeline] // catchError 00:28:34.275 [Pipeline] } 00:28:34.293 [Pipeline] // wrap 00:28:34.299 [Pipeline] } 00:28:34.316 [Pipeline] // catchError 00:28:34.326 [Pipeline] stage 00:28:34.328 [Pipeline] { (Epilogue) 00:28:34.344 [Pipeline] catchError 00:28:34.346 [Pipeline] { 00:28:34.362 [Pipeline] echo 00:28:34.363 Cleanup processes 00:28:34.370 [Pipeline] sh 00:28:34.655 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:34.655 2377252 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:34.655 2377376 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:34.669 [Pipeline] sh 00:28:34.953 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:34.953 ++ grep -v 'sudo pgrep' 00:28:34.953 ++ awk '{print $1}' 00:28:34.953 + sudo kill -9 2377252 00:28:34.965 [Pipeline] sh 00:28:35.302 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:47.511 [Pipeline] sh 00:28:47.808 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:47.808 Artifacts sizes are good 00:28:47.822 [Pipeline] archiveArtifacts 00:28:47.829 Archiving artifacts 00:28:48.069 [Pipeline] sh 00:28:48.352 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:48.366 [Pipeline] cleanWs 00:28:48.376 [WS-CLEANUP] Deleting project workspace... 00:28:48.376 [WS-CLEANUP] Deferred wipeout is used... 00:28:48.382 [WS-CLEANUP] done 00:28:48.384 [Pipeline] } 00:28:48.406 [Pipeline] // catchError 00:28:48.417 [Pipeline] sh 00:28:48.697 + logger -p user.info -t JENKINS-CI 00:28:48.706 [Pipeline] } 00:28:48.722 [Pipeline] // stage 00:28:48.728 [Pipeline] } 00:28:48.747 [Pipeline] // node 00:28:48.752 [Pipeline] End of Pipeline 00:28:48.787 Finished: SUCCESS